00:00:00.001 Started by upstream project "autotest-per-patch" build number 126258 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.069 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.092 Using shallow fetch with depth 1 00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.092 > git --version # timeout=10 00:00:00.122 > git --version # 'git version 2.39.2' 00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.184 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.197 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.210 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.210 > git config core.sparsecheckout # timeout=10 00:00:03.222 > git read-tree -mu HEAD # timeout=10 00:00:03.240 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.261 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.261 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.350 [Pipeline] Start of Pipeline 00:00:03.367 [Pipeline] library 00:00:03.368 Loading library shm_lib@master 00:00:03.369 Library shm_lib@master is cached. Copying from home. 00:00:03.383 [Pipeline] node 00:00:03.389 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:03.391 [Pipeline] { 00:00:03.400 [Pipeline] catchError 00:00:03.402 [Pipeline] { 00:00:03.411 [Pipeline] wrap 00:00:03.419 [Pipeline] { 00:00:03.424 [Pipeline] stage 00:00:03.426 [Pipeline] { (Prologue) 00:00:03.443 [Pipeline] echo 00:00:03.444 Node: VM-host-WFP1 00:00:03.448 [Pipeline] cleanWs 00:00:03.457 [WS-CLEANUP] Deleting project workspace... 00:00:03.457 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.463 [WS-CLEANUP] done 00:00:03.642 [Pipeline] setCustomBuildProperty 00:00:03.711 [Pipeline] httpRequest 00:00:03.727 [Pipeline] echo 00:00:03.729 Sorcerer 10.211.164.101 is alive 00:00:03.737 [Pipeline] httpRequest 00:00:03.742 HttpMethod: GET 00:00:03.742 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.743 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.744 Response Code: HTTP/1.1 200 OK 00:00:03.745 Success: Status code 200 is in the accepted range: 200,404 00:00:03.745 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.301 [Pipeline] sh 00:00:04.584 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.597 [Pipeline] httpRequest 00:00:04.609 [Pipeline] echo 00:00:04.611 Sorcerer 10.211.164.101 is alive 00:00:04.617 [Pipeline] httpRequest 00:00:04.620 HttpMethod: GET 00:00:04.621 URL: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:04.621 Sending request to url: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:04.634 Response Code: HTTP/1.1 200 OK 00:00:04.634 Success: Status code 200 is in the accepted range: 200,404 00:00:04.635 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:31.319 [Pipeline] sh 00:00:31.597 + tar --no-same-owner -xf spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:34.144 [Pipeline] sh 00:00:34.423 + git -C spdk log --oneline -n5 00:00:34.423 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:00:34.423 47ca8c1aa nvme: populate socket_id for rdma controllers 00:00:34.423 c1860effd nvme: populate socket_id for tcp controllers 00:00:34.423 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:34.423 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:34.447 [Pipeline] writeFile 00:00:34.467 [Pipeline] sh 00:00:34.751 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:34.764 [Pipeline] sh 00:00:35.047 + cat autorun-spdk.conf 00:00:35.047 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.047 SPDK_TEST_NVMF=1 00:00:35.047 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.047 SPDK_TEST_URING=1 00:00:35.047 SPDK_TEST_USDT=1 00:00:35.047 SPDK_RUN_UBSAN=1 00:00:35.047 NET_TYPE=virt 00:00:35.047 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.054 RUN_NIGHTLY=0 00:00:35.056 [Pipeline] } 00:00:35.075 [Pipeline] // stage 00:00:35.097 [Pipeline] stage 00:00:35.099 [Pipeline] { (Run VM) 00:00:35.115 [Pipeline] sh 00:00:35.398 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:35.398 + echo 'Start stage prepare_nvme.sh' 00:00:35.398 Start stage prepare_nvme.sh 00:00:35.398 + [[ -n 6 ]] 00:00:35.398 + disk_prefix=ex6 00:00:35.398 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:35.398 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:35.398 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:35.398 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.398 ++ SPDK_TEST_NVMF=1 00:00:35.398 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.398 ++ SPDK_TEST_URING=1 00:00:35.398 ++ SPDK_TEST_USDT=1 00:00:35.398 ++ SPDK_RUN_UBSAN=1 00:00:35.398 ++ NET_TYPE=virt 00:00:35.398 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.398 ++ RUN_NIGHTLY=0 00:00:35.398 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:35.398 + nvme_files=() 00:00:35.398 + declare -A nvme_files 00:00:35.398 + backend_dir=/var/lib/libvirt/images/backends 00:00:35.398 + nvme_files['nvme.img']=5G 00:00:35.398 + nvme_files['nvme-cmb.img']=5G 00:00:35.398 + nvme_files['nvme-multi0.img']=4G 00:00:35.398 + nvme_files['nvme-multi1.img']=4G 00:00:35.398 + nvme_files['nvme-multi2.img']=4G 00:00:35.398 + nvme_files['nvme-openstack.img']=8G 00:00:35.398 + nvme_files['nvme-zns.img']=5G 00:00:35.398 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:35.398 + (( SPDK_TEST_FTL == 1 )) 00:00:35.398 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:35.398 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:35.398 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:35.398 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:35.398 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:35.398 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:35.398 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.398 + for nvme in "${!nvme_files[@]}" 00:00:35.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:35.656 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.656 + for nvme in "${!nvme_files[@]}" 00:00:35.656 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:35.656 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.656 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:35.656 + echo 'End stage prepare_nvme.sh' 00:00:35.656 End stage prepare_nvme.sh 00:00:35.669 [Pipeline] sh 00:00:35.951 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:35.951 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:00:35.951 00:00:35.951 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:00:35.951 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:00:35.951 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:35.951 HELP=0 00:00:35.951 DRY_RUN=0 00:00:35.951 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:35.951 NVME_DISKS_TYPE=nvme,nvme, 00:00:35.951 NVME_AUTO_CREATE=0 00:00:35.951 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:35.951 NVME_CMB=,, 00:00:35.951 NVME_PMR=,, 00:00:35.951 NVME_ZNS=,, 00:00:35.951 NVME_MS=,, 00:00:35.951 NVME_FDP=,, 00:00:35.951 SPDK_VAGRANT_DISTRO=fedora38 00:00:35.951 SPDK_VAGRANT_VMCPU=10 00:00:35.951 SPDK_VAGRANT_VMRAM=12288 00:00:35.951 SPDK_VAGRANT_PROVIDER=libvirt 00:00:35.951 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:35.951 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:35.951 SPDK_OPENSTACK_NETWORK=0 00:00:35.951 VAGRANT_PACKAGE_BOX=0 00:00:35.951 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:35.951 FORCE_DISTRO=true 00:00:35.951 VAGRANT_BOX_VERSION= 00:00:35.951 EXTRA_VAGRANTFILES= 00:00:35.951 NIC_MODEL=e1000 00:00:35.951 00:00:35.951 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:00:35.951 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:38.484 Bringing machine 'default' up with 'libvirt' provider... 00:00:39.860 ==> default: Creating image (snapshot of base box volume). 00:00:40.118 ==> default: Creating domain with the following settings... 00:00:40.118 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721081512_e60be89011e560cc3d4c 00:00:40.118 ==> default: -- Domain type: kvm 00:00:40.118 ==> default: -- Cpus: 10 00:00:40.118 ==> default: -- Feature: acpi 00:00:40.118 ==> default: -- Feature: apic 00:00:40.118 ==> default: -- Feature: pae 00:00:40.118 ==> default: -- Memory: 12288M 00:00:40.118 ==> default: -- Memory Backing: hugepages: 00:00:40.118 ==> default: -- Management MAC: 00:00:40.118 ==> default: -- Loader: 00:00:40.118 ==> default: -- Nvram: 00:00:40.118 ==> default: -- Base box: spdk/fedora38 00:00:40.118 ==> default: -- Storage pool: default 00:00:40.118 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721081512_e60be89011e560cc3d4c.img (20G) 00:00:40.118 ==> default: -- Volume Cache: default 00:00:40.118 ==> default: -- Kernel: 00:00:40.118 ==> default: -- Initrd: 00:00:40.118 ==> default: -- Graphics Type: vnc 00:00:40.118 ==> default: -- Graphics Port: -1 00:00:40.118 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.118 ==> default: -- Graphics Password: Not defined 00:00:40.118 ==> default: -- Video Type: cirrus 00:00:40.118 ==> default: -- Video VRAM: 9216 00:00:40.118 ==> default: -- Sound Type: 00:00:40.118 ==> default: -- Keymap: en-us 00:00:40.118 ==> default: -- TPM Path: 00:00:40.118 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.118 ==> default: -- Command line args: 00:00:40.118 ==> default: -> value=-device, 00:00:40.118 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.118 ==> default: -> value=-drive, 00:00:40.118 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:40.118 ==> default: -> value=-device, 
00:00:40.118 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.118 ==> default: -> value=-device, 00:00:40.118 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.118 ==> default: -> value=-drive, 00:00:40.118 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:40.118 ==> default: -> value=-device, 00:00:40.118 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.118 ==> default: -> value=-drive, 00:00:40.118 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:40.118 ==> default: -> value=-device, 00:00:40.118 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.118 ==> default: -> value=-drive, 00:00:40.118 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:40.118 ==> default: -> value=-device, 00:00:40.118 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.376 ==> default: Creating shared folders metadata... 00:00:40.376 ==> default: Starting domain. 00:00:41.752 ==> default: Waiting for domain to get an IP address... 00:00:59.833 ==> default: Waiting for SSH to become available... 00:00:59.833 ==> default: Configuring and enabling network interfaces... 00:01:04.042 default: SSH address: 192.168.121.132:22 00:01:04.042 default: SSH username: vagrant 00:01:04.042 default: SSH auth method: private key 00:01:06.604 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:14.834 ==> default: Mounting SSHFS shared folder... 00:01:17.364 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:17.364 ==> default: Checking Mount.. 00:01:18.738 ==> default: Folder Successfully Mounted! 00:01:18.738 ==> default: Running provisioner: file... 00:01:20.112 default: ~/.gitconfig => .gitconfig 00:01:20.369 00:01:20.369 SUCCESS! 00:01:20.369 00:01:20.369 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:20.369 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:20.369 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
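Editor's note: the -device/-drive pairs above give the test VM two emulated NVMe controllers: nvme-0 (serial 12340) backed by ex6-nvme.img with a single namespace, and nvme-1 (serial 12341) with three namespaces backed by the ex6-nvme-multi0/1/2 images. The setup.sh status output later in this log (nvme0 nvme0n1, nvme1 nvme1n1 nvme1n2 nvme1n3) matches that layout. A minimal sketch of checking it from inside the guest, assuming nvme-cli and pciutils happen to be installed there (neither is shown in this log):

  # List the two QEMU NVMe controllers on the guest's PCI bus.
  lspci -nn | grep -i 'non-volatile'
  # Enumerate controllers and namespaces; expect nvme0n1 plus nvme1n1..nvme1n3.
  nvme list
  # Count namespaces exposed by the second controller; expect 3.
  ls /sys/class/nvme/nvme1 | grep -c '^nvme1n'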
00:01:20.369 00:01:20.381 [Pipeline] } 00:01:20.400 [Pipeline] // stage 00:01:20.410 [Pipeline] dir 00:01:20.411 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:01:20.412 [Pipeline] { 00:01:20.426 [Pipeline] catchError 00:01:20.428 [Pipeline] { 00:01:20.443 [Pipeline] sh 00:01:20.721 + vagrant ssh-config --host vagrant 00:01:20.721 + sed -ne /^Host/,$p 00:01:20.721 + tee ssh_conf 00:01:24.018 Host vagrant 00:01:24.018 HostName 192.168.121.132 00:01:24.018 User vagrant 00:01:24.018 Port 22 00:01:24.018 UserKnownHostsFile /dev/null 00:01:24.018 StrictHostKeyChecking no 00:01:24.018 PasswordAuthentication no 00:01:24.018 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:24.018 IdentitiesOnly yes 00:01:24.018 LogLevel FATAL 00:01:24.018 ForwardAgent yes 00:01:24.018 ForwardX11 yes 00:01:24.018 00:01:24.031 [Pipeline] withEnv 00:01:24.033 [Pipeline] { 00:01:24.047 [Pipeline] sh 00:01:24.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:24.329 source /etc/os-release 00:01:24.329 [[ -e /image.version ]] && img=$(< /image.version) 00:01:24.329 # Minimal, systemd-like check. 00:01:24.329 if [[ -e /.dockerenv ]]; then 00:01:24.329 # Clear garbage from the node's name: 00:01:24.329 # agt-er_autotest_547-896 -> autotest_547-896 00:01:24.329 # $HOSTNAME is the actual container id 00:01:24.329 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:24.329 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:24.329 # We can assume this is a mount from a host where container is running, 00:01:24.329 # so fetch its hostname to easily identify the target swarm worker. 00:01:24.329 container="$(< /etc/hostname) ($agent)" 00:01:24.329 else 00:01:24.329 # Fallback 00:01:24.329 container=$agent 00:01:24.329 fi 00:01:24.329 fi 00:01:24.329 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:24.329 00:01:24.600 [Pipeline] } 00:01:24.620 [Pipeline] // withEnv 00:01:24.629 [Pipeline] setCustomBuildProperty 00:01:24.644 [Pipeline] stage 00:01:24.646 [Pipeline] { (Tests) 00:01:24.669 [Pipeline] sh 00:01:24.957 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:25.237 [Pipeline] sh 00:01:25.535 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:25.827 [Pipeline] timeout 00:01:25.827 Timeout set to expire in 30 min 00:01:25.829 [Pipeline] { 00:01:25.841 [Pipeline] sh 00:01:26.174 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:26.763 HEAD is now at fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:01:26.777 [Pipeline] sh 00:01:27.052 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:27.321 [Pipeline] sh 00:01:27.600 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:27.877 [Pipeline] sh 00:01:28.157 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:28.415 ++ readlink -f spdk_repo 00:01:28.415 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:28.415 + [[ -n /home/vagrant/spdk_repo ]] 00:01:28.415 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:28.415 + 
DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:28.415 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:28.415 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:28.415 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:28.415 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:28.415 + cd /home/vagrant/spdk_repo 00:01:28.415 + source /etc/os-release 00:01:28.415 ++ NAME='Fedora Linux' 00:01:28.415 ++ VERSION='38 (Cloud Edition)' 00:01:28.415 ++ ID=fedora 00:01:28.415 ++ VERSION_ID=38 00:01:28.415 ++ VERSION_CODENAME= 00:01:28.415 ++ PLATFORM_ID=platform:f38 00:01:28.415 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:28.415 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.415 ++ LOGO=fedora-logo-icon 00:01:28.415 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:28.415 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.415 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:28.415 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.415 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.415 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.415 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:28.415 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.415 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:28.415 ++ SUPPORT_END=2024-05-14 00:01:28.415 ++ VARIANT='Cloud Edition' 00:01:28.415 ++ VARIANT_ID=cloud 00:01:28.415 + uname -a 00:01:28.415 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:28.415 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:28.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:28.982 Hugepages 00:01:28.982 node hugesize free / total 00:01:28.982 node0 1048576kB 0 / 0 00:01:28.982 node0 2048kB 0 / 0 00:01:28.982 00:01:28.982 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.982 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:28.982 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:28.982 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:28.982 + rm -f /tmp/spdk-ld-path 00:01:28.982 + source autorun-spdk.conf 00:01:28.982 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.982 ++ SPDK_TEST_NVMF=1 00:01:28.982 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.982 ++ SPDK_TEST_URING=1 00:01:28.982 ++ SPDK_TEST_USDT=1 00:01:28.982 ++ SPDK_RUN_UBSAN=1 00:01:28.982 ++ NET_TYPE=virt 00:01:28.982 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.982 ++ RUN_NIGHTLY=0 00:01:28.982 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.982 + [[ -n '' ]] 00:01:28.982 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:29.241 + for M in /var/spdk/build-*-manifest.txt 00:01:29.241 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.241 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.241 + for M in /var/spdk/build-*-manifest.txt 00:01:29.241 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.241 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.241 ++ uname 00:01:29.241 + [[ Linux == \L\i\n\u\x ]] 00:01:29.241 + sudo dmesg -T 00:01:29.241 + sudo dmesg --clear 00:01:29.241 + dmesg_pid=5107 00:01:29.241 + sudo dmesg -Tw 00:01:29.241 + [[ Fedora Linux == FreeBSD ]] 00:01:29.241 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.241 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.241 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.241 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.241 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.241 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.241 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.241 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:29.241 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.241 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.241 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.241 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.241 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.241 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.241 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.241 Test configuration: 00:01:29.241 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.241 SPDK_TEST_NVMF=1 00:01:29.241 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.241 SPDK_TEST_URING=1 00:01:29.241 SPDK_TEST_USDT=1 00:01:29.241 SPDK_RUN_UBSAN=1 00:01:29.241 NET_TYPE=virt 00:01:29.241 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.241 RUN_NIGHTLY=0 22:12:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:29.241 22:12:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.241 22:12:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.241 22:12:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.241 22:12:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.241 22:12:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.241 22:12:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.241 22:12:42 -- paths/export.sh@5 -- $ export PATH 00:01:29.241 22:12:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.241 22:12:42 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:29.241 22:12:42 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:29.241 22:12:42 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081562.XXXXXX 
00:01:29.241 22:12:42 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081562.hL58BN 00:01:29.241 22:12:42 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:29.241 22:12:42 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:29.241 22:12:42 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:29.241 22:12:42 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:29.241 22:12:42 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.241 22:12:42 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:29.241 22:12:42 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:29.241 22:12:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.499 22:12:42 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:29.499 22:12:42 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:29.499 22:12:42 -- pm/common@17 -- $ local monitor 00:01:29.499 22:12:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.499 22:12:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.499 22:12:42 -- pm/common@21 -- $ date +%s 00:01:29.499 22:12:42 -- pm/common@25 -- $ sleep 1 00:01:29.499 22:12:42 -- pm/common@21 -- $ date +%s 00:01:29.499 22:12:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721081562 00:01:29.499 22:12:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721081562 00:01:29.499 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721081562_collect-vmstat.pm.log 00:01:29.499 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721081562_collect-cpu-load.pm.log 00:01:30.435 22:12:43 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:30.435 22:12:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.435 22:12:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.435 22:12:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:30.435 22:12:43 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.435 Mon Jul 15 10:12:43 PM UTC 2024 00:01:30.435 22:12:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.435 v24.09-pre-234-gfcbf7f00f 00:01:30.435 22:12:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:30.435 22:12:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.435 22:12:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.435 22:12:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:30.435 22:12:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.435 22:12:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.435 ************************************ 00:01:30.435 START TEST ubsan 00:01:30.435 ************************************ 00:01:30.435 using ubsan 00:01:30.435 22:12:43 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:01:30.435 00:01:30.435 real 0m0.000s 00:01:30.435 user 0m0.000s 00:01:30.435 sys 0m0.000s 00:01:30.435 22:12:43 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:30.435 22:12:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.435 ************************************ 00:01:30.435 END TEST ubsan 00:01:30.435 ************************************ 00:01:30.435 22:12:43 -- common/autotest_common.sh@1142 -- $ return 0 00:01:30.435 22:12:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.435 22:12:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.435 22:12:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.435 22:12:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:30.693 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:30.693 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:31.262 Using 'verbs' RDMA provider 00:01:47.117 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:05.229 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:05.229 Creating mk/config.mk...done. 00:02:05.229 Creating mk/cc.flags.mk...done. 00:02:05.229 Type 'make' to build. 00:02:05.229 22:13:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:05.229 22:13:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:05.229 22:13:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.229 22:13:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.229 ************************************ 00:02:05.229 START TEST make 00:02:05.229 ************************************ 00:02:05.229 22:13:16 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:05.229 make[1]: Nothing to be done for 'all'. 
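Editor's note: the autobuild step above configures SPDK with the flags assembled from autorun-spdk.conf (--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared) and then builds with make -j10. A rough sketch of reproducing the same kind of build on a developer box, assuming the public SPDK mirror and leaving out the flags that depend on CI-only paths such as /usr/src/fio:

  # Fetch SPDK and its submodules (DPDK, isa-l, ...).
  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init
  # Install build dependencies (script ships with SPDK).
  sudo ./scripts/pkgdep.sh
  # Configure with a subset of the flags used by this job, then build.
  ./configure --enable-debug --enable-werror --enable-ubsan --with-uring --with-shared
  make -j"$(nproc)"

The remaining flags (--with-usdt, --with-ublk, the iscsi/rdma options, the fio plugin) can be added back as the corresponding headers and sources are present on the host.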
00:02:13.382 The Meson build system 00:02:13.382 Version: 1.3.1 00:02:13.382 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:13.382 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:13.382 Build type: native build 00:02:13.382 Program cat found: YES (/usr/bin/cat) 00:02:13.382 Project name: DPDK 00:02:13.382 Project version: 24.03.0 00:02:13.382 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:13.382 C linker for the host machine: cc ld.bfd 2.39-16 00:02:13.382 Host machine cpu family: x86_64 00:02:13.382 Host machine cpu: x86_64 00:02:13.382 Message: ## Building in Developer Mode ## 00:02:13.382 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.382 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:13.382 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.382 Program python3 found: YES (/usr/bin/python3) 00:02:13.382 Program cat found: YES (/usr/bin/cat) 00:02:13.382 Compiler for C supports arguments -march=native: YES 00:02:13.382 Checking for size of "void *" : 8 00:02:13.382 Checking for size of "void *" : 8 (cached) 00:02:13.382 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:13.382 Library m found: YES 00:02:13.382 Library numa found: YES 00:02:13.382 Has header "numaif.h" : YES 00:02:13.382 Library fdt found: NO 00:02:13.382 Library execinfo found: NO 00:02:13.382 Has header "execinfo.h" : YES 00:02:13.382 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:13.382 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.382 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.382 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.382 Run-time dependency openssl found: YES 3.0.9 00:02:13.382 Run-time dependency libpcap found: YES 1.10.4 00:02:13.382 Has header "pcap.h" with dependency libpcap: YES 00:02:13.382 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.382 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.382 Compiler for C supports arguments -Wformat: YES 00:02:13.382 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.382 Compiler for C supports arguments -Wformat-security: NO 00:02:13.382 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.382 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.382 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.382 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.382 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.382 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.382 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.382 Compiler for C supports arguments -Wundef: YES 00:02:13.382 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.382 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.382 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.382 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.382 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.382 Program objdump found: YES (/usr/bin/objdump) 00:02:13.382 Compiler for C supports arguments -mavx512f: YES 00:02:13.382 Checking if "AVX512 checking" compiles: YES 00:02:13.382 Fetching value of define "__SSE4_2__" : 1 00:02:13.382 Fetching value of define 
"__AES__" : 1 00:02:13.382 Fetching value of define "__AVX__" : 1 00:02:13.382 Fetching value of define "__AVX2__" : 1 00:02:13.382 Fetching value of define "__AVX512BW__" : 1 00:02:13.382 Fetching value of define "__AVX512CD__" : 1 00:02:13.382 Fetching value of define "__AVX512DQ__" : 1 00:02:13.382 Fetching value of define "__AVX512F__" : 1 00:02:13.382 Fetching value of define "__AVX512VL__" : 1 00:02:13.382 Fetching value of define "__PCLMUL__" : 1 00:02:13.382 Fetching value of define "__RDRND__" : 1 00:02:13.382 Fetching value of define "__RDSEED__" : 1 00:02:13.382 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.382 Fetching value of define "__znver1__" : (undefined) 00:02:13.382 Fetching value of define "__znver2__" : (undefined) 00:02:13.382 Fetching value of define "__znver3__" : (undefined) 00:02:13.382 Fetching value of define "__znver4__" : (undefined) 00:02:13.382 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.382 Message: lib/log: Defining dependency "log" 00:02:13.382 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.382 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.382 Checking for function "getentropy" : NO 00:02:13.382 Message: lib/eal: Defining dependency "eal" 00:02:13.382 Message: lib/ring: Defining dependency "ring" 00:02:13.382 Message: lib/rcu: Defining dependency "rcu" 00:02:13.382 Message: lib/mempool: Defining dependency "mempool" 00:02:13.382 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.382 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.382 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.382 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.382 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.382 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.382 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:13.382 Compiler for C supports arguments -mpclmul: YES 00:02:13.382 Compiler for C supports arguments -maes: YES 00:02:13.382 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.382 Compiler for C supports arguments -mavx512bw: YES 00:02:13.382 Compiler for C supports arguments -mavx512dq: YES 00:02:13.382 Compiler for C supports arguments -mavx512vl: YES 00:02:13.382 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.382 Compiler for C supports arguments -mavx2: YES 00:02:13.382 Compiler for C supports arguments -mavx: YES 00:02:13.382 Message: lib/net: Defining dependency "net" 00:02:13.382 Message: lib/meter: Defining dependency "meter" 00:02:13.382 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.382 Message: lib/pci: Defining dependency "pci" 00:02:13.382 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.382 Message: lib/hash: Defining dependency "hash" 00:02:13.382 Message: lib/timer: Defining dependency "timer" 00:02:13.382 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.382 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.382 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.382 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.382 Message: lib/power: Defining dependency "power" 00:02:13.382 Message: lib/reorder: Defining dependency "reorder" 00:02:13.382 Message: lib/security: Defining dependency "security" 00:02:13.382 Has header "linux/userfaultfd.h" : YES 00:02:13.382 Has header "linux/vduse.h" : YES 00:02:13.382 Message: lib/vhost: Defining dependency "vhost" 00:02:13.382 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:02:13.382 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.382 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.382 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.382 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:13.382 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:13.382 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:13.382 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:13.382 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:13.382 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:13.382 Program doxygen found: YES (/usr/bin/doxygen) 00:02:13.382 Configuring doxy-api-html.conf using configuration 00:02:13.382 Configuring doxy-api-man.conf using configuration 00:02:13.382 Program mandb found: YES (/usr/bin/mandb) 00:02:13.382 Program sphinx-build found: NO 00:02:13.382 Configuring rte_build_config.h using configuration 00:02:13.382 Message: 00:02:13.382 ================= 00:02:13.382 Applications Enabled 00:02:13.382 ================= 00:02:13.382 00:02:13.382 apps: 00:02:13.382 00:02:13.382 00:02:13.382 Message: 00:02:13.382 ================= 00:02:13.382 Libraries Enabled 00:02:13.382 ================= 00:02:13.382 00:02:13.382 libs: 00:02:13.382 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.382 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:13.382 cryptodev, dmadev, power, reorder, security, vhost, 00:02:13.382 00:02:13.382 Message: 00:02:13.382 =============== 00:02:13.382 Drivers Enabled 00:02:13.382 =============== 00:02:13.382 00:02:13.382 common: 00:02:13.382 00:02:13.382 bus: 00:02:13.382 pci, vdev, 00:02:13.382 mempool: 00:02:13.382 ring, 00:02:13.382 dma: 00:02:13.382 00:02:13.382 net: 00:02:13.382 00:02:13.382 crypto: 00:02:13.382 00:02:13.382 compress: 00:02:13.382 00:02:13.382 vdpa: 00:02:13.382 00:02:13.382 00:02:13.382 Message: 00:02:13.382 ================= 00:02:13.382 Content Skipped 00:02:13.382 ================= 00:02:13.382 00:02:13.382 apps: 00:02:13.383 dumpcap: explicitly disabled via build config 00:02:13.383 graph: explicitly disabled via build config 00:02:13.383 pdump: explicitly disabled via build config 00:02:13.383 proc-info: explicitly disabled via build config 00:02:13.383 test-acl: explicitly disabled via build config 00:02:13.383 test-bbdev: explicitly disabled via build config 00:02:13.383 test-cmdline: explicitly disabled via build config 00:02:13.383 test-compress-perf: explicitly disabled via build config 00:02:13.383 test-crypto-perf: explicitly disabled via build config 00:02:13.383 test-dma-perf: explicitly disabled via build config 00:02:13.383 test-eventdev: explicitly disabled via build config 00:02:13.383 test-fib: explicitly disabled via build config 00:02:13.383 test-flow-perf: explicitly disabled via build config 00:02:13.383 test-gpudev: explicitly disabled via build config 00:02:13.383 test-mldev: explicitly disabled via build config 00:02:13.383 test-pipeline: explicitly disabled via build config 00:02:13.383 test-pmd: explicitly disabled via build config 00:02:13.383 test-regex: explicitly disabled via build config 00:02:13.383 test-sad: explicitly disabled via build config 00:02:13.383 test-security-perf: explicitly disabled via build config 00:02:13.383 00:02:13.383 libs: 00:02:13.383 argparse: 
explicitly disabled via build config 00:02:13.383 metrics: explicitly disabled via build config 00:02:13.383 acl: explicitly disabled via build config 00:02:13.383 bbdev: explicitly disabled via build config 00:02:13.383 bitratestats: explicitly disabled via build config 00:02:13.383 bpf: explicitly disabled via build config 00:02:13.383 cfgfile: explicitly disabled via build config 00:02:13.383 distributor: explicitly disabled via build config 00:02:13.383 efd: explicitly disabled via build config 00:02:13.383 eventdev: explicitly disabled via build config 00:02:13.383 dispatcher: explicitly disabled via build config 00:02:13.383 gpudev: explicitly disabled via build config 00:02:13.383 gro: explicitly disabled via build config 00:02:13.383 gso: explicitly disabled via build config 00:02:13.383 ip_frag: explicitly disabled via build config 00:02:13.383 jobstats: explicitly disabled via build config 00:02:13.383 latencystats: explicitly disabled via build config 00:02:13.383 lpm: explicitly disabled via build config 00:02:13.383 member: explicitly disabled via build config 00:02:13.383 pcapng: explicitly disabled via build config 00:02:13.383 rawdev: explicitly disabled via build config 00:02:13.383 regexdev: explicitly disabled via build config 00:02:13.383 mldev: explicitly disabled via build config 00:02:13.383 rib: explicitly disabled via build config 00:02:13.383 sched: explicitly disabled via build config 00:02:13.383 stack: explicitly disabled via build config 00:02:13.383 ipsec: explicitly disabled via build config 00:02:13.383 pdcp: explicitly disabled via build config 00:02:13.383 fib: explicitly disabled via build config 00:02:13.383 port: explicitly disabled via build config 00:02:13.383 pdump: explicitly disabled via build config 00:02:13.383 table: explicitly disabled via build config 00:02:13.383 pipeline: explicitly disabled via build config 00:02:13.383 graph: explicitly disabled via build config 00:02:13.383 node: explicitly disabled via build config 00:02:13.383 00:02:13.383 drivers: 00:02:13.383 common/cpt: not in enabled drivers build config 00:02:13.383 common/dpaax: not in enabled drivers build config 00:02:13.383 common/iavf: not in enabled drivers build config 00:02:13.383 common/idpf: not in enabled drivers build config 00:02:13.383 common/ionic: not in enabled drivers build config 00:02:13.383 common/mvep: not in enabled drivers build config 00:02:13.383 common/octeontx: not in enabled drivers build config 00:02:13.383 bus/auxiliary: not in enabled drivers build config 00:02:13.383 bus/cdx: not in enabled drivers build config 00:02:13.383 bus/dpaa: not in enabled drivers build config 00:02:13.383 bus/fslmc: not in enabled drivers build config 00:02:13.383 bus/ifpga: not in enabled drivers build config 00:02:13.383 bus/platform: not in enabled drivers build config 00:02:13.383 bus/uacce: not in enabled drivers build config 00:02:13.383 bus/vmbus: not in enabled drivers build config 00:02:13.383 common/cnxk: not in enabled drivers build config 00:02:13.383 common/mlx5: not in enabled drivers build config 00:02:13.383 common/nfp: not in enabled drivers build config 00:02:13.383 common/nitrox: not in enabled drivers build config 00:02:13.383 common/qat: not in enabled drivers build config 00:02:13.383 common/sfc_efx: not in enabled drivers build config 00:02:13.383 mempool/bucket: not in enabled drivers build config 00:02:13.383 mempool/cnxk: not in enabled drivers build config 00:02:13.383 mempool/dpaa: not in enabled drivers build config 00:02:13.383 mempool/dpaa2: 
not in enabled drivers build config 00:02:13.383 mempool/octeontx: not in enabled drivers build config 00:02:13.383 mempool/stack: not in enabled drivers build config 00:02:13.383 dma/cnxk: not in enabled drivers build config 00:02:13.383 dma/dpaa: not in enabled drivers build config 00:02:13.383 dma/dpaa2: not in enabled drivers build config 00:02:13.383 dma/hisilicon: not in enabled drivers build config 00:02:13.383 dma/idxd: not in enabled drivers build config 00:02:13.383 dma/ioat: not in enabled drivers build config 00:02:13.383 dma/skeleton: not in enabled drivers build config 00:02:13.383 net/af_packet: not in enabled drivers build config 00:02:13.383 net/af_xdp: not in enabled drivers build config 00:02:13.383 net/ark: not in enabled drivers build config 00:02:13.383 net/atlantic: not in enabled drivers build config 00:02:13.383 net/avp: not in enabled drivers build config 00:02:13.383 net/axgbe: not in enabled drivers build config 00:02:13.383 net/bnx2x: not in enabled drivers build config 00:02:13.383 net/bnxt: not in enabled drivers build config 00:02:13.383 net/bonding: not in enabled drivers build config 00:02:13.383 net/cnxk: not in enabled drivers build config 00:02:13.383 net/cpfl: not in enabled drivers build config 00:02:13.383 net/cxgbe: not in enabled drivers build config 00:02:13.383 net/dpaa: not in enabled drivers build config 00:02:13.383 net/dpaa2: not in enabled drivers build config 00:02:13.383 net/e1000: not in enabled drivers build config 00:02:13.383 net/ena: not in enabled drivers build config 00:02:13.383 net/enetc: not in enabled drivers build config 00:02:13.383 net/enetfec: not in enabled drivers build config 00:02:13.383 net/enic: not in enabled drivers build config 00:02:13.383 net/failsafe: not in enabled drivers build config 00:02:13.383 net/fm10k: not in enabled drivers build config 00:02:13.383 net/gve: not in enabled drivers build config 00:02:13.383 net/hinic: not in enabled drivers build config 00:02:13.383 net/hns3: not in enabled drivers build config 00:02:13.383 net/i40e: not in enabled drivers build config 00:02:13.383 net/iavf: not in enabled drivers build config 00:02:13.383 net/ice: not in enabled drivers build config 00:02:13.383 net/idpf: not in enabled drivers build config 00:02:13.383 net/igc: not in enabled drivers build config 00:02:13.383 net/ionic: not in enabled drivers build config 00:02:13.383 net/ipn3ke: not in enabled drivers build config 00:02:13.383 net/ixgbe: not in enabled drivers build config 00:02:13.383 net/mana: not in enabled drivers build config 00:02:13.383 net/memif: not in enabled drivers build config 00:02:13.383 net/mlx4: not in enabled drivers build config 00:02:13.383 net/mlx5: not in enabled drivers build config 00:02:13.383 net/mvneta: not in enabled drivers build config 00:02:13.383 net/mvpp2: not in enabled drivers build config 00:02:13.383 net/netvsc: not in enabled drivers build config 00:02:13.383 net/nfb: not in enabled drivers build config 00:02:13.383 net/nfp: not in enabled drivers build config 00:02:13.383 net/ngbe: not in enabled drivers build config 00:02:13.383 net/null: not in enabled drivers build config 00:02:13.383 net/octeontx: not in enabled drivers build config 00:02:13.383 net/octeon_ep: not in enabled drivers build config 00:02:13.383 net/pcap: not in enabled drivers build config 00:02:13.383 net/pfe: not in enabled drivers build config 00:02:13.383 net/qede: not in enabled drivers build config 00:02:13.383 net/ring: not in enabled drivers build config 00:02:13.383 net/sfc: not in 
enabled drivers build config 00:02:13.383 net/softnic: not in enabled drivers build config 00:02:13.383 net/tap: not in enabled drivers build config 00:02:13.383 net/thunderx: not in enabled drivers build config 00:02:13.383 net/txgbe: not in enabled drivers build config 00:02:13.383 net/vdev_netvsc: not in enabled drivers build config 00:02:13.383 net/vhost: not in enabled drivers build config 00:02:13.383 net/virtio: not in enabled drivers build config 00:02:13.383 net/vmxnet3: not in enabled drivers build config 00:02:13.383 raw/*: missing internal dependency, "rawdev" 00:02:13.383 crypto/armv8: not in enabled drivers build config 00:02:13.383 crypto/bcmfs: not in enabled drivers build config 00:02:13.383 crypto/caam_jr: not in enabled drivers build config 00:02:13.383 crypto/ccp: not in enabled drivers build config 00:02:13.383 crypto/cnxk: not in enabled drivers build config 00:02:13.383 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.383 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.383 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.383 crypto/mlx5: not in enabled drivers build config 00:02:13.383 crypto/mvsam: not in enabled drivers build config 00:02:13.383 crypto/nitrox: not in enabled drivers build config 00:02:13.383 crypto/null: not in enabled drivers build config 00:02:13.383 crypto/octeontx: not in enabled drivers build config 00:02:13.383 crypto/openssl: not in enabled drivers build config 00:02:13.383 crypto/scheduler: not in enabled drivers build config 00:02:13.383 crypto/uadk: not in enabled drivers build config 00:02:13.383 crypto/virtio: not in enabled drivers build config 00:02:13.383 compress/isal: not in enabled drivers build config 00:02:13.383 compress/mlx5: not in enabled drivers build config 00:02:13.383 compress/nitrox: not in enabled drivers build config 00:02:13.383 compress/octeontx: not in enabled drivers build config 00:02:13.383 compress/zlib: not in enabled drivers build config 00:02:13.383 regex/*: missing internal dependency, "regexdev" 00:02:13.383 ml/*: missing internal dependency, "mldev" 00:02:13.383 vdpa/ifc: not in enabled drivers build config 00:02:13.383 vdpa/mlx5: not in enabled drivers build config 00:02:13.383 vdpa/nfp: not in enabled drivers build config 00:02:13.383 vdpa/sfc: not in enabled drivers build config 00:02:13.383 event/*: missing internal dependency, "eventdev" 00:02:13.383 baseband/*: missing internal dependency, "bbdev" 00:02:13.383 gpu/*: missing internal dependency, "gpudev" 00:02:13.383 00:02:13.383 00:02:13.642 Build targets in project: 85 00:02:13.642 00:02:13.642 DPDK 24.03.0 00:02:13.642 00:02:13.642 User defined options 00:02:13.642 buildtype : debug 00:02:13.642 default_library : shared 00:02:13.642 libdir : lib 00:02:13.642 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.642 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:13.642 c_link_args : 00:02:13.642 cpu_instruction_set: native 00:02:13.642 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:13.642 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:13.642 enable_docs : false 00:02:13.642 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:13.642 enable_kmods : false 00:02:13.642 max_lcores : 128 00:02:13.642 tests : false 00:02:13.642 00:02:13.642 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.209 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:14.209 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.209 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.209 [3/268] Linking static target lib/librte_log.a 00:02:14.209 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.209 [5/268] Linking static target lib/librte_kvargs.a 00:02:14.209 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.562 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.821 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.821 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.821 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.821 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.821 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.821 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.821 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.821 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.821 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.821 [17/268] Linking static target lib/librte_telemetry.a 00:02:14.821 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.080 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.338 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.338 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.338 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.338 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.338 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.338 [25/268] Linking target lib/librte_log.so.24.1 00:02:15.338 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.338 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.338 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.597 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.597 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.597 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.597 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:15.597 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.856 
[34/268] Linking target lib/librte_telemetry.so.24.1 00:02:15.856 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.856 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.856 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.856 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.856 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.115 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.115 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.115 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.116 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.116 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.116 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.116 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.116 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.116 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.375 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.375 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.375 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.635 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.635 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.635 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.635 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.635 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.635 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.635 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.635 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.635 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.894 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.894 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.894 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.153 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.153 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.153 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.153 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.153 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:17.153 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.412 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.412 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.412 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.671 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 
00:02:17.671 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.671 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.671 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.671 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.671 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.671 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.671 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.931 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.931 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.931 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.931 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.931 [85/268] Linking static target lib/librte_ring.a 00:02:18.190 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.190 [87/268] Linking static target lib/librte_eal.a 00:02:18.190 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.190 [89/268] Linking static target lib/librte_rcu.a 00:02:18.190 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.447 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.447 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.447 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.447 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.447 [95/268] Linking static target lib/librte_mempool.a 00:02:18.447 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.447 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.705 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.705 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.705 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.705 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.705 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.705 [103/268] Linking static target lib/librte_mbuf.a 00:02:18.963 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.963 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.963 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.221 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.221 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.221 [109/268] Linking static target lib/librte_net.a 00:02:19.221 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.221 [111/268] Linking static target lib/librte_meter.a 00:02:19.221 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.221 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.221 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.507 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.507 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.764 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.764 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.764 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.764 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.764 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.764 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.022 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.022 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.279 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.279 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.279 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.279 [128/268] Linking static target lib/librte_pci.a 00:02:20.279 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.279 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.279 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.538 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.538 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.538 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.538 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.538 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.538 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.538 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.538 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.538 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.538 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.538 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.796 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.796 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.796 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.796 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.796 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.796 [148/268] Linking static target lib/librte_cmdline.a 00:02:21.055 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.055 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.055 [151/268] Linking static target lib/librte_ethdev.a 00:02:21.055 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.055 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.055 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.055 [155/268] Linking static target lib/librte_timer.a 00:02:21.314 [156/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.314 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.314 [158/268] Linking static target lib/librte_hash.a 00:02:21.314 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:21.314 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.314 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.314 [162/268] Linking static target lib/librte_compressdev.a 00:02:21.573 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.573 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.573 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.832 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.832 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.832 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.832 [169/268] Linking static target lib/librte_dmadev.a 00:02:22.091 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.091 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.091 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.091 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.091 [174/268] Linking static target lib/librte_cryptodev.a 00:02:22.091 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.349 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.349 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.349 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.608 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.608 [180/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.608 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.608 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.608 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.608 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.867 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.867 [186/268] Linking static target lib/librte_power.a 00:02:22.867 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.867 [188/268] Linking static target lib/librte_reorder.a 00:02:23.125 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.125 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.125 [191/268] Linking static target lib/librte_security.a 00:02:23.125 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.125 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.383 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.383 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.641 [196/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.641 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.641 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.641 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.900 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.900 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.900 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.157 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.157 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.157 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.157 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.157 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.157 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.157 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.157 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.414 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.414 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.414 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.414 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.414 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.414 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.414 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.414 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.414 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:24.414 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.414 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.670 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.670 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.670 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.670 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.670 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.670 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.236 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.494 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.494 [230/268] Linking static target lib/librte_vhost.a 00:02:28.037 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.568 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.827 
[233/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.827 [234/268] Linking target lib/librte_eal.so.24.1 00:02:30.827 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.827 [236/268] Linking target lib/librte_pci.so.24.1 00:02:30.827 [237/268] Linking target lib/librte_meter.so.24.1 00:02:30.827 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.827 [239/268] Linking target lib/librte_ring.so.24.1 00:02:31.086 [240/268] Linking target lib/librte_timer.so.24.1 00:02:31.086 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:31.086 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.086 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:31.086 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.086 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:31.086 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.086 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:31.086 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:31.086 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.345 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.345 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.345 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.345 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:31.629 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.629 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:31.629 [256/268] Linking target lib/librte_net.so.24.1 00:02:31.629 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:31.629 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.629 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.629 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.889 [261/268] Linking target lib/librte_hash.so.24.1 00:02:31.889 [262/268] Linking target lib/librte_security.so.24.1 00:02:31.889 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.889 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.889 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:31.889 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.148 [267/268] Linking target lib/librte_power.so.24.1 00:02:32.148 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.148 INFO: autodetecting backend as ninja 00:02:32.148 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:33.524 CC lib/ut/ut.o 00:02:33.524 CC lib/ut_mock/mock.o 00:02:33.524 CC lib/log/log.o 00:02:33.524 CC lib/log/log_flags.o 00:02:33.524 CC lib/log/log_deprecated.o 00:02:33.524 LIB libspdk_ut.a 00:02:33.524 LIB libspdk_log.a 00:02:33.524 LIB libspdk_ut_mock.a 00:02:33.524 SO libspdk_ut.so.2.0 00:02:33.524 SO libspdk_log.so.7.0 00:02:33.524 SO libspdk_ut_mock.so.6.0 00:02:33.524 SYMLINK libspdk_ut.so 00:02:33.524 SYMLINK libspdk_ut_mock.so 00:02:33.524 SYMLINK libspdk_log.so 00:02:33.783 CC lib/dma/dma.o 00:02:33.783 CC 
lib/util/base64.o 00:02:33.783 CC lib/util/bit_array.o 00:02:33.783 CXX lib/trace_parser/trace.o 00:02:33.783 CC lib/util/cpuset.o 00:02:33.783 CC lib/util/crc32.o 00:02:33.783 CC lib/util/crc16.o 00:02:33.783 CC lib/util/crc32c.o 00:02:33.783 CC lib/ioat/ioat.o 00:02:34.042 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.042 CC lib/util/crc32_ieee.o 00:02:34.042 CC lib/util/crc64.o 00:02:34.042 CC lib/vfio_user/host/vfio_user.o 00:02:34.042 CC lib/util/dif.o 00:02:34.042 CC lib/util/fd.o 00:02:34.042 LIB libspdk_dma.a 00:02:34.042 SO libspdk_dma.so.4.0 00:02:34.042 LIB libspdk_ioat.a 00:02:34.042 CC lib/util/fd_group.o 00:02:34.042 CC lib/util/file.o 00:02:34.042 CC lib/util/hexlify.o 00:02:34.301 SO libspdk_ioat.so.7.0 00:02:34.301 SYMLINK libspdk_dma.so 00:02:34.301 CC lib/util/iov.o 00:02:34.301 LIB libspdk_vfio_user.a 00:02:34.301 CC lib/util/math.o 00:02:34.301 SYMLINK libspdk_ioat.so 00:02:34.301 CC lib/util/net.o 00:02:34.301 CC lib/util/pipe.o 00:02:34.301 SO libspdk_vfio_user.so.5.0 00:02:34.301 CC lib/util/string.o 00:02:34.301 CC lib/util/strerror_tls.o 00:02:34.301 SYMLINK libspdk_vfio_user.so 00:02:34.301 CC lib/util/uuid.o 00:02:34.301 CC lib/util/xor.o 00:02:34.301 CC lib/util/zipf.o 00:02:34.560 LIB libspdk_util.a 00:02:34.560 SO libspdk_util.so.9.1 00:02:34.820 LIB libspdk_trace_parser.a 00:02:34.820 SO libspdk_trace_parser.so.5.0 00:02:34.820 SYMLINK libspdk_util.so 00:02:35.080 SYMLINK libspdk_trace_parser.so 00:02:35.080 CC lib/conf/conf.o 00:02:35.080 CC lib/json/json_parse.o 00:02:35.080 CC lib/vmd/vmd.o 00:02:35.080 CC lib/json/json_write.o 00:02:35.080 CC lib/json/json_util.o 00:02:35.080 CC lib/vmd/led.o 00:02:35.080 CC lib/env_dpdk/env.o 00:02:35.080 CC lib/rdma_provider/common.o 00:02:35.080 CC lib/idxd/idxd.o 00:02:35.080 CC lib/rdma_utils/rdma_utils.o 00:02:35.080 CC lib/idxd/idxd_user.o 00:02:35.339 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.339 LIB libspdk_conf.a 00:02:35.339 CC lib/idxd/idxd_kernel.o 00:02:35.339 CC lib/env_dpdk/memory.o 00:02:35.339 SO libspdk_conf.so.6.0 00:02:35.339 LIB libspdk_json.a 00:02:35.339 LIB libspdk_rdma_utils.a 00:02:35.339 SO libspdk_json.so.6.0 00:02:35.339 SYMLINK libspdk_conf.so 00:02:35.339 CC lib/env_dpdk/pci.o 00:02:35.339 SO libspdk_rdma_utils.so.1.0 00:02:35.339 LIB libspdk_rdma_provider.a 00:02:35.339 SYMLINK libspdk_json.so 00:02:35.339 CC lib/env_dpdk/init.o 00:02:35.339 SYMLINK libspdk_rdma_utils.so 00:02:35.339 CC lib/env_dpdk/threads.o 00:02:35.339 CC lib/env_dpdk/pci_ioat.o 00:02:35.339 SO libspdk_rdma_provider.so.6.0 00:02:35.598 SYMLINK libspdk_rdma_provider.so 00:02:35.598 CC lib/env_dpdk/pci_virtio.o 00:02:35.598 LIB libspdk_idxd.a 00:02:35.598 CC lib/env_dpdk/pci_vmd.o 00:02:35.598 CC lib/env_dpdk/pci_idxd.o 00:02:35.598 SO libspdk_idxd.so.12.0 00:02:35.598 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.598 LIB libspdk_vmd.a 00:02:35.598 SO libspdk_vmd.so.6.0 00:02:35.598 CC lib/env_dpdk/pci_event.o 00:02:35.598 SYMLINK libspdk_idxd.so 00:02:35.598 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.598 CC lib/env_dpdk/sigbus_handler.o 00:02:35.598 CC lib/env_dpdk/pci_dpdk.o 00:02:35.598 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.598 SYMLINK libspdk_vmd.so 00:02:35.857 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.857 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.857 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.116 LIB libspdk_jsonrpc.a 00:02:36.116 SO libspdk_jsonrpc.so.6.0 00:02:36.116 SYMLINK libspdk_jsonrpc.so 00:02:36.375 LIB libspdk_env_dpdk.a 00:02:36.634 SO libspdk_env_dpdk.so.15.0 00:02:36.634 CC lib/rpc/rpc.o 
00:02:36.634 SYMLINK libspdk_env_dpdk.so 00:02:36.893 LIB libspdk_rpc.a 00:02:36.893 SO libspdk_rpc.so.6.0 00:02:36.893 SYMLINK libspdk_rpc.so 00:02:37.152 CC lib/notify/notify.o 00:02:37.152 CC lib/notify/notify_rpc.o 00:02:37.152 CC lib/keyring/keyring.o 00:02:37.152 CC lib/keyring/keyring_rpc.o 00:02:37.443 CC lib/trace/trace.o 00:02:37.443 CC lib/trace/trace_rpc.o 00:02:37.443 CC lib/trace/trace_flags.o 00:02:37.443 LIB libspdk_notify.a 00:02:37.443 LIB libspdk_keyring.a 00:02:37.443 SO libspdk_notify.so.6.0 00:02:37.443 LIB libspdk_trace.a 00:02:37.443 SO libspdk_keyring.so.1.0 00:02:37.443 SYMLINK libspdk_notify.so 00:02:37.701 SO libspdk_trace.so.10.0 00:02:37.701 SYMLINK libspdk_keyring.so 00:02:37.701 SYMLINK libspdk_trace.so 00:02:37.960 CC lib/sock/sock_rpc.o 00:02:37.960 CC lib/sock/sock.o 00:02:38.219 CC lib/thread/thread.o 00:02:38.219 CC lib/thread/iobuf.o 00:02:38.478 LIB libspdk_sock.a 00:02:38.478 SO libspdk_sock.so.10.0 00:02:38.478 SYMLINK libspdk_sock.so 00:02:39.045 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.045 CC lib/nvme/nvme_ctrlr.o 00:02:39.045 CC lib/nvme/nvme_fabric.o 00:02:39.045 CC lib/nvme/nvme_ns_cmd.o 00:02:39.045 CC lib/nvme/nvme_ns.o 00:02:39.045 CC lib/nvme/nvme_pcie_common.o 00:02:39.045 CC lib/nvme/nvme_pcie.o 00:02:39.045 CC lib/nvme/nvme_qpair.o 00:02:39.045 CC lib/nvme/nvme.o 00:02:39.304 LIB libspdk_thread.a 00:02:39.562 SO libspdk_thread.so.10.1 00:02:39.562 SYMLINK libspdk_thread.so 00:02:39.562 CC lib/nvme/nvme_quirks.o 00:02:39.562 CC lib/nvme/nvme_transport.o 00:02:39.562 CC lib/nvme/nvme_discovery.o 00:02:39.828 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:39.828 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:39.828 CC lib/nvme/nvme_tcp.o 00:02:39.828 CC lib/nvme/nvme_opal.o 00:02:40.089 CC lib/nvme/nvme_io_msg.o 00:02:40.089 CC lib/accel/accel.o 00:02:40.089 CC lib/nvme/nvme_poll_group.o 00:02:40.089 CC lib/nvme/nvme_zns.o 00:02:40.346 CC lib/nvme/nvme_stubs.o 00:02:40.346 CC lib/nvme/nvme_auth.o 00:02:40.346 CC lib/nvme/nvme_cuse.o 00:02:40.346 CC lib/nvme/nvme_rdma.o 00:02:40.605 CC lib/accel/accel_rpc.o 00:02:40.863 CC lib/accel/accel_sw.o 00:02:40.863 CC lib/blob/blobstore.o 00:02:40.863 CC lib/init/json_config.o 00:02:40.863 CC lib/blob/request.o 00:02:40.863 CC lib/virtio/virtio.o 00:02:40.863 LIB libspdk_accel.a 00:02:41.122 CC lib/init/subsystem.o 00:02:41.122 SO libspdk_accel.so.15.1 00:02:41.122 CC lib/virtio/virtio_vhost_user.o 00:02:41.122 CC lib/blob/zeroes.o 00:02:41.122 CC lib/blob/blob_bs_dev.o 00:02:41.122 CC lib/virtio/virtio_vfio_user.o 00:02:41.122 CC lib/virtio/virtio_pci.o 00:02:41.122 SYMLINK libspdk_accel.so 00:02:41.122 CC lib/init/subsystem_rpc.o 00:02:41.122 CC lib/init/rpc.o 00:02:41.379 CC lib/bdev/bdev.o 00:02:41.379 CC lib/bdev/bdev_zone.o 00:02:41.379 LIB libspdk_init.a 00:02:41.379 CC lib/bdev/part.o 00:02:41.379 CC lib/bdev/bdev_rpc.o 00:02:41.379 CC lib/bdev/scsi_nvme.o 00:02:41.379 LIB libspdk_virtio.a 00:02:41.379 SO libspdk_init.so.5.0 00:02:41.379 SO libspdk_virtio.so.7.0 00:02:41.379 SYMLINK libspdk_init.so 00:02:41.379 LIB libspdk_nvme.a 00:02:41.379 SYMLINK libspdk_virtio.so 00:02:41.638 SO libspdk_nvme.so.13.1 00:02:41.638 CC lib/event/app.o 00:02:41.638 CC lib/event/reactor.o 00:02:41.638 CC lib/event/log_rpc.o 00:02:41.638 CC lib/event/app_rpc.o 00:02:41.638 CC lib/event/scheduler_static.o 00:02:41.896 SYMLINK libspdk_nvme.so 00:02:42.155 LIB libspdk_event.a 00:02:42.155 SO libspdk_event.so.14.0 00:02:42.414 SYMLINK libspdk_event.so 00:02:43.352 LIB libspdk_blob.a 00:02:43.352 SO libspdk_blob.so.11.0 00:02:43.611 
SYMLINK libspdk_blob.so 00:02:43.611 LIB libspdk_bdev.a 00:02:43.611 SO libspdk_bdev.so.15.1 00:02:43.870 SYMLINK libspdk_bdev.so 00:02:43.870 CC lib/lvol/lvol.o 00:02:43.870 CC lib/blobfs/blobfs.o 00:02:43.870 CC lib/blobfs/tree.o 00:02:43.870 CC lib/scsi/dev.o 00:02:43.870 CC lib/scsi/lun.o 00:02:43.870 CC lib/scsi/port.o 00:02:43.870 CC lib/ublk/ublk.o 00:02:43.870 CC lib/ftl/ftl_core.o 00:02:43.870 CC lib/nbd/nbd.o 00:02:43.870 CC lib/nvmf/ctrlr.o 00:02:43.870 CC lib/nbd/nbd_rpc.o 00:02:44.130 CC lib/scsi/scsi.o 00:02:44.130 CC lib/ublk/ublk_rpc.o 00:02:44.130 CC lib/nvmf/ctrlr_discovery.o 00:02:44.130 CC lib/ftl/ftl_init.o 00:02:44.130 CC lib/scsi/scsi_bdev.o 00:02:44.389 CC lib/ftl/ftl_layout.o 00:02:44.389 LIB libspdk_nbd.a 00:02:44.389 CC lib/scsi/scsi_pr.o 00:02:44.389 SO libspdk_nbd.so.7.0 00:02:44.389 SYMLINK libspdk_nbd.so 00:02:44.389 CC lib/scsi/scsi_rpc.o 00:02:44.389 CC lib/ftl/ftl_debug.o 00:02:44.389 LIB libspdk_ublk.a 00:02:44.662 LIB libspdk_blobfs.a 00:02:44.662 SO libspdk_ublk.so.3.0 00:02:44.662 CC lib/ftl/ftl_io.o 00:02:44.662 SO libspdk_blobfs.so.10.0 00:02:44.662 CC lib/scsi/task.o 00:02:44.662 SYMLINK libspdk_ublk.so 00:02:44.662 CC lib/nvmf/ctrlr_bdev.o 00:02:44.662 CC lib/nvmf/subsystem.o 00:02:44.662 CC lib/nvmf/nvmf.o 00:02:44.662 CC lib/ftl/ftl_sb.o 00:02:44.662 SYMLINK libspdk_blobfs.so 00:02:44.662 CC lib/ftl/ftl_l2p.o 00:02:44.662 CC lib/ftl/ftl_l2p_flat.o 00:02:44.662 LIB libspdk_lvol.a 00:02:44.662 SO libspdk_lvol.so.10.0 00:02:44.662 SYMLINK libspdk_lvol.so 00:02:44.662 CC lib/ftl/ftl_nv_cache.o 00:02:44.920 CC lib/nvmf/nvmf_rpc.o 00:02:44.920 LIB libspdk_scsi.a 00:02:44.920 CC lib/nvmf/transport.o 00:02:44.920 CC lib/ftl/ftl_band.o 00:02:44.920 CC lib/nvmf/tcp.o 00:02:44.920 SO libspdk_scsi.so.9.0 00:02:44.920 SYMLINK libspdk_scsi.so 00:02:44.920 CC lib/nvmf/stubs.o 00:02:45.178 CC lib/nvmf/mdns_server.o 00:02:45.178 CC lib/nvmf/rdma.o 00:02:45.436 CC lib/nvmf/auth.o 00:02:45.436 CC lib/ftl/ftl_band_ops.o 00:02:45.436 CC lib/iscsi/conn.o 00:02:45.436 CC lib/iscsi/init_grp.o 00:02:45.693 CC lib/vhost/vhost.o 00:02:45.693 CC lib/vhost/vhost_rpc.o 00:02:45.693 CC lib/ftl/ftl_writer.o 00:02:45.693 CC lib/vhost/vhost_scsi.o 00:02:45.693 CC lib/ftl/ftl_rq.o 00:02:45.693 CC lib/iscsi/iscsi.o 00:02:45.951 CC lib/vhost/vhost_blk.o 00:02:45.951 CC lib/ftl/ftl_reloc.o 00:02:45.951 CC lib/vhost/rte_vhost_user.o 00:02:46.209 CC lib/iscsi/md5.o 00:02:46.209 CC lib/ftl/ftl_l2p_cache.o 00:02:46.209 CC lib/ftl/ftl_p2l.o 00:02:46.209 CC lib/iscsi/param.o 00:02:46.209 CC lib/iscsi/portal_grp.o 00:02:46.209 CC lib/ftl/mngt/ftl_mngt.o 00:02:46.467 CC lib/iscsi/tgt_node.o 00:02:46.467 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:46.467 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:46.467 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:46.467 CC lib/iscsi/iscsi_subsystem.o 00:02:46.725 CC lib/iscsi/iscsi_rpc.o 00:02:46.725 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.725 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.725 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.725 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.725 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.984 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.984 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.984 LIB libspdk_nvmf.a 00:02:46.984 CC lib/iscsi/task.o 00:02:46.984 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.984 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.984 CC lib/ftl/utils/ftl_conf.o 00:02:46.984 CC lib/ftl/utils/ftl_md.o 00:02:46.984 LIB libspdk_vhost.a 00:02:46.984 SO libspdk_nvmf.so.19.0 00:02:46.984 CC lib/ftl/utils/ftl_mempool.o 00:02:46.984 CC 
lib/ftl/utils/ftl_bitmap.o 00:02:46.984 SO libspdk_vhost.so.8.0 00:02:46.984 CC lib/ftl/utils/ftl_property.o 00:02:46.984 LIB libspdk_iscsi.a 00:02:47.241 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.241 SYMLINK libspdk_vhost.so 00:02:47.241 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.241 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.241 SO libspdk_iscsi.so.8.0 00:02:47.241 SYMLINK libspdk_nvmf.so 00:02:47.241 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.241 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.241 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.241 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.241 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.241 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.241 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.498 SYMLINK libspdk_iscsi.so 00:02:47.498 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.498 CC lib/ftl/base/ftl_base_dev.o 00:02:47.498 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.498 CC lib/ftl/ftl_trace.o 00:02:47.756 LIB libspdk_ftl.a 00:02:48.014 SO libspdk_ftl.so.9.0 00:02:48.272 SYMLINK libspdk_ftl.so 00:02:48.839 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.839 CC module/accel/iaa/accel_iaa.o 00:02:48.839 CC module/accel/error/accel_error.o 00:02:48.839 CC module/sock/posix/posix.o 00:02:48.839 CC module/sock/uring/uring.o 00:02:48.839 CC module/accel/ioat/accel_ioat.o 00:02:48.839 CC module/blob/bdev/blob_bdev.o 00:02:48.839 CC module/keyring/file/keyring.o 00:02:48.839 CC module/accel/dsa/accel_dsa.o 00:02:48.839 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.839 LIB libspdk_env_dpdk_rpc.a 00:02:48.839 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.839 CC module/keyring/file/keyring_rpc.o 00:02:48.839 SYMLINK libspdk_env_dpdk_rpc.so 00:02:48.839 CC module/accel/error/accel_error_rpc.o 00:02:48.839 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.839 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.839 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.839 LIB libspdk_scheduler_dynamic.a 00:02:49.097 LIB libspdk_blob_bdev.a 00:02:49.097 SO libspdk_scheduler_dynamic.so.4.0 00:02:49.097 SO libspdk_blob_bdev.so.11.0 00:02:49.097 LIB libspdk_keyring_file.a 00:02:49.097 LIB libspdk_accel_iaa.a 00:02:49.097 LIB libspdk_accel_ioat.a 00:02:49.097 LIB libspdk_accel_error.a 00:02:49.097 LIB libspdk_accel_dsa.a 00:02:49.097 SO libspdk_keyring_file.so.1.0 00:02:49.097 SYMLINK libspdk_scheduler_dynamic.so 00:02:49.097 SYMLINK libspdk_blob_bdev.so 00:02:49.097 SO libspdk_accel_ioat.so.6.0 00:02:49.097 SO libspdk_accel_error.so.2.0 00:02:49.097 SO libspdk_accel_iaa.so.3.0 00:02:49.097 SO libspdk_accel_dsa.so.5.0 00:02:49.097 SYMLINK libspdk_keyring_file.so 00:02:49.097 SYMLINK libspdk_accel_iaa.so 00:02:49.097 SYMLINK libspdk_accel_ioat.so 00:02:49.097 SYMLINK libspdk_accel_error.so 00:02:49.097 SYMLINK libspdk_accel_dsa.so 00:02:49.097 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.356 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.356 CC module/keyring/linux/keyring.o 00:02:49.356 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.356 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:49.356 LIB libspdk_sock_uring.a 00:02:49.356 LIB libspdk_sock_posix.a 00:02:49.356 SO libspdk_sock_uring.so.5.0 00:02:49.356 CC module/bdev/error/vbdev_error.o 00:02:49.356 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.356 CC module/bdev/gpt/gpt.o 00:02:49.356 CC module/bdev/delay/vbdev_delay.o 00:02:49.356 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:49.356 LIB libspdk_scheduler_gscheduler.a 00:02:49.356 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.356 SO 
libspdk_sock_posix.so.6.0 00:02:49.356 SO libspdk_scheduler_gscheduler.so.4.0 00:02:49.356 SYMLINK libspdk_sock_uring.so 00:02:49.356 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.356 CC module/keyring/linux/keyring_rpc.o 00:02:49.615 SYMLINK libspdk_sock_posix.so 00:02:49.615 SYMLINK libspdk_scheduler_gscheduler.so 00:02:49.615 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.615 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.615 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.615 LIB libspdk_keyring_linux.a 00:02:49.615 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.615 SO libspdk_keyring_linux.so.1.0 00:02:49.615 CC module/bdev/malloc/bdev_malloc.o 00:02:49.615 LIB libspdk_bdev_error.a 00:02:49.615 SO libspdk_bdev_error.so.6.0 00:02:49.615 SYMLINK libspdk_keyring_linux.so 00:02:49.615 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.615 LIB libspdk_bdev_delay.a 00:02:49.615 LIB libspdk_blobfs_bdev.a 00:02:49.872 SYMLINK libspdk_bdev_error.so 00:02:49.872 SO libspdk_bdev_delay.so.6.0 00:02:49.872 SO libspdk_blobfs_bdev.so.6.0 00:02:49.872 CC module/bdev/null/bdev_null.o 00:02:49.872 LIB libspdk_bdev_gpt.a 00:02:49.872 SYMLINK libspdk_bdev_delay.so 00:02:49.872 CC module/bdev/null/bdev_null_rpc.o 00:02:49.872 SYMLINK libspdk_blobfs_bdev.so 00:02:49.872 SO libspdk_bdev_gpt.so.6.0 00:02:49.872 CC module/bdev/nvme/bdev_nvme.o 00:02:49.872 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.872 SYMLINK libspdk_bdev_gpt.so 00:02:49.872 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.872 LIB libspdk_bdev_malloc.a 00:02:49.872 LIB libspdk_bdev_lvol.a 00:02:49.872 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:50.130 CC module/bdev/split/vbdev_split.o 00:02:50.130 SO libspdk_bdev_malloc.so.6.0 00:02:50.130 CC module/bdev/raid/bdev_raid.o 00:02:50.130 SO libspdk_bdev_lvol.so.6.0 00:02:50.130 LIB libspdk_bdev_null.a 00:02:50.130 SO libspdk_bdev_null.so.6.0 00:02:50.130 SYMLINK libspdk_bdev_malloc.so 00:02:50.130 CC module/bdev/raid/bdev_raid_rpc.o 00:02:50.130 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:50.130 SYMLINK libspdk_bdev_lvol.so 00:02:50.130 CC module/bdev/raid/bdev_raid_sb.o 00:02:50.130 SYMLINK libspdk_bdev_null.so 00:02:50.130 CC module/bdev/raid/raid0.o 00:02:50.130 CC module/bdev/split/vbdev_split_rpc.o 00:02:50.130 LIB libspdk_bdev_passthru.a 00:02:50.130 CC module/bdev/nvme/nvme_rpc.o 00:02:50.130 SO libspdk_bdev_passthru.so.6.0 00:02:50.388 SYMLINK libspdk_bdev_passthru.so 00:02:50.388 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:50.388 LIB libspdk_bdev_split.a 00:02:50.388 SO libspdk_bdev_split.so.6.0 00:02:50.388 CC module/bdev/nvme/bdev_mdns_client.o 00:02:50.388 CC module/bdev/nvme/vbdev_opal.o 00:02:50.388 SYMLINK libspdk_bdev_split.so 00:02:50.388 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.388 CC module/bdev/raid/raid1.o 00:02:50.388 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.388 CC module/bdev/uring/bdev_uring.o 00:02:50.388 LIB libspdk_bdev_zone_block.a 00:02:50.388 SO libspdk_bdev_zone_block.so.6.0 00:02:50.679 CC module/bdev/uring/bdev_uring_rpc.o 00:02:50.679 SYMLINK libspdk_bdev_zone_block.so 00:02:50.679 CC module/bdev/raid/concat.o 00:02:50.679 CC module/bdev/aio/bdev_aio.o 00:02:50.679 CC module/bdev/aio/bdev_aio_rpc.o 00:02:50.679 LIB libspdk_bdev_uring.a 00:02:50.679 CC module/bdev/ftl/bdev_ftl.o 00:02:50.679 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:50.679 SO libspdk_bdev_uring.so.6.0 00:02:50.679 CC module/bdev/iscsi/bdev_iscsi.o 00:02:50.679 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:50.679 CC module/bdev/virtio/bdev_virtio_scsi.o 
00:02:50.679 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.939 LIB libspdk_bdev_raid.a 00:02:50.939 SYMLINK libspdk_bdev_uring.so 00:02:50.939 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.939 LIB libspdk_bdev_aio.a 00:02:50.939 SO libspdk_bdev_raid.so.6.0 00:02:50.939 SO libspdk_bdev_aio.so.6.0 00:02:50.939 SYMLINK libspdk_bdev_aio.so 00:02:50.939 SYMLINK libspdk_bdev_raid.so 00:02:50.939 LIB libspdk_bdev_ftl.a 00:02:51.198 SO libspdk_bdev_ftl.so.6.0 00:02:51.198 LIB libspdk_bdev_iscsi.a 00:02:51.198 SYMLINK libspdk_bdev_ftl.so 00:02:51.198 SO libspdk_bdev_iscsi.so.6.0 00:02:51.198 SYMLINK libspdk_bdev_iscsi.so 00:02:51.198 LIB libspdk_bdev_virtio.a 00:02:51.457 SO libspdk_bdev_virtio.so.6.0 00:02:51.457 SYMLINK libspdk_bdev_virtio.so 00:02:52.027 LIB libspdk_bdev_nvme.a 00:02:52.027 SO libspdk_bdev_nvme.so.7.0 00:02:52.027 SYMLINK libspdk_bdev_nvme.so 00:02:52.602 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.602 CC module/event/subsystems/keyring/keyring.o 00:02:52.860 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.860 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.860 CC module/event/subsystems/vmd/vmd.o 00:02:52.860 CC module/event/subsystems/sock/sock.o 00:02:52.860 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.860 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.860 LIB libspdk_event_vmd.a 00:02:52.860 LIB libspdk_event_keyring.a 00:02:52.860 LIB libspdk_event_vhost_blk.a 00:02:52.860 LIB libspdk_event_scheduler.a 00:02:52.860 LIB libspdk_event_iobuf.a 00:02:52.860 LIB libspdk_event_sock.a 00:02:52.860 SO libspdk_event_vmd.so.6.0 00:02:52.860 SO libspdk_event_vhost_blk.so.3.0 00:02:52.860 SO libspdk_event_keyring.so.1.0 00:02:52.860 SO libspdk_event_scheduler.so.4.0 00:02:52.860 SO libspdk_event_iobuf.so.3.0 00:02:52.860 SO libspdk_event_sock.so.5.0 00:02:52.860 SYMLINK libspdk_event_vhost_blk.so 00:02:52.860 SYMLINK libspdk_event_vmd.so 00:02:52.860 SYMLINK libspdk_event_scheduler.so 00:02:52.860 SYMLINK libspdk_event_keyring.so 00:02:52.860 SYMLINK libspdk_event_iobuf.so 00:02:53.118 SYMLINK libspdk_event_sock.so 00:02:53.377 CC module/event/subsystems/accel/accel.o 00:02:53.635 LIB libspdk_event_accel.a 00:02:53.635 SO libspdk_event_accel.so.6.0 00:02:53.635 SYMLINK libspdk_event_accel.so 00:02:54.202 CC module/event/subsystems/bdev/bdev.o 00:02:54.202 LIB libspdk_event_bdev.a 00:02:54.460 SO libspdk_event_bdev.so.6.0 00:02:54.460 SYMLINK libspdk_event_bdev.so 00:02:54.719 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.719 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.719 CC module/event/subsystems/scsi/scsi.o 00:02:54.719 CC module/event/subsystems/ublk/ublk.o 00:02:54.719 CC module/event/subsystems/nbd/nbd.o 00:02:54.978 LIB libspdk_event_ublk.a 00:02:54.978 LIB libspdk_event_nbd.a 00:02:54.978 LIB libspdk_event_scsi.a 00:02:54.978 SO libspdk_event_nbd.so.6.0 00:02:54.978 SO libspdk_event_ublk.so.3.0 00:02:54.978 LIB libspdk_event_nvmf.a 00:02:54.978 SO libspdk_event_scsi.so.6.0 00:02:54.978 SYMLINK libspdk_event_nbd.so 00:02:54.978 SO libspdk_event_nvmf.so.6.0 00:02:54.978 SYMLINK libspdk_event_ublk.so 00:02:54.978 SYMLINK libspdk_event_scsi.so 00:02:54.978 SYMLINK libspdk_event_nvmf.so 00:02:55.548 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:55.548 CC module/event/subsystems/iscsi/iscsi.o 00:02:55.548 LIB libspdk_event_vhost_scsi.a 00:02:55.548 LIB libspdk_event_iscsi.a 00:02:55.548 SO libspdk_event_vhost_scsi.so.3.0 00:02:55.548 SO libspdk_event_iscsi.so.6.0 00:02:55.807 SYMLINK libspdk_event_vhost_scsi.so 
00:02:55.807 SYMLINK libspdk_event_iscsi.so 00:02:56.065 SO libspdk.so.6.0 00:02:56.065 SYMLINK libspdk.so 00:02:56.322 CXX app/trace/trace.o 00:02:56.322 CC app/spdk_lspci/spdk_lspci.o 00:02:56.322 CC app/spdk_nvme_identify/identify.o 00:02:56.322 CC app/trace_record/trace_record.o 00:02:56.322 CC app/spdk_nvme_perf/perf.o 00:02:56.322 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.322 CC app/nvmf_tgt/nvmf_main.o 00:02:56.322 CC app/spdk_tgt/spdk_tgt.o 00:02:56.322 CC test/thread/poller_perf/poller_perf.o 00:02:56.322 CC examples/util/zipf/zipf.o 00:02:56.322 LINK spdk_lspci 00:02:56.580 LINK nvmf_tgt 00:02:56.580 LINK spdk_trace_record 00:02:56.580 LINK zipf 00:02:56.580 LINK poller_perf 00:02:56.580 LINK iscsi_tgt 00:02:56.580 LINK spdk_tgt 00:02:56.580 LINK spdk_trace 00:02:56.580 CC app/spdk_nvme_discover/discovery_aer.o 00:02:56.838 CC app/spdk_top/spdk_top.o 00:02:56.838 CC app/spdk_dd/spdk_dd.o 00:02:56.838 LINK spdk_nvme_discover 00:02:56.838 CC examples/ioat/perf/perf.o 00:02:56.838 CC test/dma/test_dma/test_dma.o 00:02:56.838 CC test/app/bdev_svc/bdev_svc.o 00:02:56.838 CC app/fio/nvme/fio_plugin.o 00:02:56.838 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.095 LINK spdk_nvme_identify 00:02:57.095 LINK spdk_nvme_perf 00:02:57.095 LINK bdev_svc 00:02:57.095 LINK ioat_perf 00:02:57.095 CC app/fio/bdev/fio_plugin.o 00:02:57.095 LINK test_dma 00:02:57.358 LINK spdk_dd 00:02:57.358 TEST_HEADER include/spdk/accel.h 00:02:57.358 TEST_HEADER include/spdk/accel_module.h 00:02:57.358 TEST_HEADER include/spdk/assert.h 00:02:57.358 TEST_HEADER include/spdk/barrier.h 00:02:57.358 TEST_HEADER include/spdk/base64.h 00:02:57.358 TEST_HEADER include/spdk/bdev.h 00:02:57.358 TEST_HEADER include/spdk/bdev_module.h 00:02:57.358 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.358 TEST_HEADER include/spdk/bit_array.h 00:02:57.358 TEST_HEADER include/spdk/bit_pool.h 00:02:57.358 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.358 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.358 TEST_HEADER include/spdk/blobfs.h 00:02:57.358 TEST_HEADER include/spdk/blob.h 00:02:57.358 TEST_HEADER include/spdk/conf.h 00:02:57.358 TEST_HEADER include/spdk/config.h 00:02:57.358 TEST_HEADER include/spdk/cpuset.h 00:02:57.358 TEST_HEADER include/spdk/crc16.h 00:02:57.358 CC examples/ioat/verify/verify.o 00:02:57.358 TEST_HEADER include/spdk/crc32.h 00:02:57.358 TEST_HEADER include/spdk/crc64.h 00:02:57.358 TEST_HEADER include/spdk/dif.h 00:02:57.358 TEST_HEADER include/spdk/dma.h 00:02:57.358 TEST_HEADER include/spdk/endian.h 00:02:57.358 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.358 TEST_HEADER include/spdk/env.h 00:02:57.358 TEST_HEADER include/spdk/event.h 00:02:57.358 TEST_HEADER include/spdk/fd_group.h 00:02:57.358 TEST_HEADER include/spdk/fd.h 00:02:57.358 LINK nvme_fuzz 00:02:57.358 TEST_HEADER include/spdk/file.h 00:02:57.358 TEST_HEADER include/spdk/ftl.h 00:02:57.358 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.358 TEST_HEADER include/spdk/hexlify.h 00:02:57.358 TEST_HEADER include/spdk/histogram_data.h 00:02:57.358 TEST_HEADER include/spdk/idxd.h 00:02:57.358 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.358 TEST_HEADER include/spdk/init.h 00:02:57.358 TEST_HEADER include/spdk/ioat.h 00:02:57.358 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.358 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.358 TEST_HEADER include/spdk/json.h 00:02:57.358 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.358 CC test/app/histogram_perf/histogram_perf.o 00:02:57.358 TEST_HEADER include/spdk/keyring.h 00:02:57.358 TEST_HEADER 
include/spdk/keyring_module.h 00:02:57.358 TEST_HEADER include/spdk/likely.h 00:02:57.358 TEST_HEADER include/spdk/log.h 00:02:57.358 TEST_HEADER include/spdk/lvol.h 00:02:57.358 TEST_HEADER include/spdk/memory.h 00:02:57.358 TEST_HEADER include/spdk/mmio.h 00:02:57.358 TEST_HEADER include/spdk/nbd.h 00:02:57.358 TEST_HEADER include/spdk/net.h 00:02:57.358 TEST_HEADER include/spdk/notify.h 00:02:57.358 TEST_HEADER include/spdk/nvme.h 00:02:57.358 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.358 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.358 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.358 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.358 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.358 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.358 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.358 TEST_HEADER include/spdk/nvmf.h 00:02:57.358 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.358 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.358 TEST_HEADER include/spdk/opal.h 00:02:57.358 TEST_HEADER include/spdk/opal_spec.h 00:02:57.358 TEST_HEADER include/spdk/pci_ids.h 00:02:57.358 TEST_HEADER include/spdk/pipe.h 00:02:57.358 TEST_HEADER include/spdk/queue.h 00:02:57.358 TEST_HEADER include/spdk/reduce.h 00:02:57.358 TEST_HEADER include/spdk/rpc.h 00:02:57.358 TEST_HEADER include/spdk/scheduler.h 00:02:57.358 TEST_HEADER include/spdk/scsi.h 00:02:57.358 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.358 TEST_HEADER include/spdk/sock.h 00:02:57.358 TEST_HEADER include/spdk/stdinc.h 00:02:57.358 LINK spdk_nvme 00:02:57.358 TEST_HEADER include/spdk/string.h 00:02:57.358 TEST_HEADER include/spdk/thread.h 00:02:57.358 TEST_HEADER include/spdk/trace.h 00:02:57.358 TEST_HEADER include/spdk/trace_parser.h 00:02:57.358 TEST_HEADER include/spdk/tree.h 00:02:57.358 TEST_HEADER include/spdk/ublk.h 00:02:57.358 TEST_HEADER include/spdk/util.h 00:02:57.358 TEST_HEADER include/spdk/uuid.h 00:02:57.358 TEST_HEADER include/spdk/version.h 00:02:57.359 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.359 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.359 TEST_HEADER include/spdk/vhost.h 00:02:57.359 TEST_HEADER include/spdk/vmd.h 00:02:57.359 TEST_HEADER include/spdk/xor.h 00:02:57.359 TEST_HEADER include/spdk/zipf.h 00:02:57.359 CXX test/cpp_headers/accel.o 00:02:57.619 LINK histogram_perf 00:02:57.619 LINK verify 00:02:57.619 CC test/env/vtophys/vtophys.o 00:02:57.619 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.619 CC test/app/jsoncat/jsoncat.o 00:02:57.619 LINK spdk_bdev 00:02:57.619 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.619 CC test/app/stub/stub.o 00:02:57.619 CXX test/cpp_headers/accel_module.o 00:02:57.619 LINK vtophys 00:02:57.619 LINK spdk_top 00:02:57.619 CXX test/cpp_headers/assert.o 00:02:57.619 LINK jsoncat 00:02:57.877 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.877 LINK stub 00:02:57.877 CXX test/cpp_headers/barrier.o 00:02:57.877 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.877 LINK env_dpdk_post_init 00:02:57.877 CC test/rpc_client/rpc_client_test.o 00:02:57.877 LINK lsvmd 00:02:57.877 CC app/vhost/vhost.o 00:02:57.877 CC test/event/event_perf/event_perf.o 00:02:57.877 CXX test/cpp_headers/base64.o 00:02:58.145 CC test/nvme/aer/aer.o 00:02:58.145 LINK mem_callbacks 00:02:58.145 CC test/event/reactor/reactor.o 00:02:58.145 LINK event_perf 00:02:58.146 LINK rpc_client_test 00:02:58.146 CXX test/cpp_headers/bdev.o 00:02:58.146 CXX test/cpp_headers/bdev_module.o 00:02:58.146 LINK vhost 00:02:58.146 LINK reactor 00:02:58.146 CC examples/vmd/led/led.o 
00:02:58.404 LINK aer 00:02:58.404 CC test/env/memory/memory_ut.o 00:02:58.404 CXX test/cpp_headers/bdev_zone.o 00:02:58.404 CC test/env/pci/pci_ut.o 00:02:58.404 CC test/nvme/reset/reset.o 00:02:58.404 LINK led 00:02:58.404 CC test/event/reactor_perf/reactor_perf.o 00:02:58.404 CC test/nvme/sgl/sgl.o 00:02:58.404 CC examples/idxd/perf/perf.o 00:02:58.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.662 CXX test/cpp_headers/bit_array.o 00:02:58.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.662 LINK reactor_perf 00:02:58.662 LINK reset 00:02:58.662 CXX test/cpp_headers/bit_pool.o 00:02:58.662 LINK sgl 00:02:58.662 LINK pci_ut 00:02:58.662 CC test/event/app_repeat/app_repeat.o 00:02:58.921 LINK idxd_perf 00:02:58.921 CXX test/cpp_headers/blob_bdev.o 00:02:58.921 CC test/event/scheduler/scheduler.o 00:02:58.921 LINK app_repeat 00:02:58.921 LINK vhost_fuzz 00:02:58.921 CC test/nvme/e2edp/nvme_dp.o 00:02:58.921 LINK iscsi_fuzz 00:02:58.921 CC test/nvme/overhead/overhead.o 00:02:58.921 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.921 CXX test/cpp_headers/blobfs.o 00:02:59.180 LINK scheduler 00:02:59.180 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:59.180 CC test/nvme/err_injection/err_injection.o 00:02:59.180 CXX test/cpp_headers/blob.o 00:02:59.180 CC test/nvme/startup/startup.o 00:02:59.180 LINK nvme_dp 00:02:59.180 LINK interrupt_tgt 00:02:59.180 LINK memory_ut 00:02:59.180 LINK overhead 00:02:59.180 CC test/nvme/reserve/reserve.o 00:02:59.180 CC test/nvme/simple_copy/simple_copy.o 00:02:59.438 CXX test/cpp_headers/conf.o 00:02:59.438 LINK err_injection 00:02:59.438 LINK startup 00:02:59.438 CC test/nvme/connect_stress/connect_stress.o 00:02:59.438 CC test/nvme/boot_partition/boot_partition.o 00:02:59.438 LINK reserve 00:02:59.438 CXX test/cpp_headers/config.o 00:02:59.438 LINK simple_copy 00:02:59.438 LINK connect_stress 00:02:59.438 CXX test/cpp_headers/cpuset.o 00:02:59.697 CC examples/sock/hello_world/hello_sock.o 00:02:59.697 CC test/nvme/compliance/nvme_compliance.o 00:02:59.697 LINK boot_partition 00:02:59.697 CC examples/thread/thread/thread_ex.o 00:02:59.697 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.697 CC test/accel/dif/dif.o 00:02:59.697 CXX test/cpp_headers/crc16.o 00:02:59.697 LINK hello_sock 00:02:59.697 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.697 LINK fused_ordering 00:02:59.955 CC test/blobfs/mkfs/mkfs.o 00:02:59.955 CXX test/cpp_headers/crc32.o 00:02:59.955 CC test/nvme/fdp/fdp.o 00:02:59.955 LINK thread 00:02:59.955 LINK nvme_compliance 00:02:59.955 CXX test/cpp_headers/crc64.o 00:02:59.955 CC test/lvol/esnap/esnap.o 00:02:59.955 CXX test/cpp_headers/dif.o 00:02:59.955 LINK doorbell_aers 00:02:59.955 LINK mkfs 00:03:00.212 LINK dif 00:03:00.212 CXX test/cpp_headers/dma.o 00:03:00.212 CXX test/cpp_headers/endian.o 00:03:00.212 LINK fdp 00:03:00.212 CC test/nvme/cuse/cuse.o 00:03:00.212 CC examples/nvme/hello_world/hello_world.o 00:03:00.212 CC examples/nvme/reconnect/reconnect.o 00:03:00.212 CXX test/cpp_headers/env_dpdk.o 00:03:00.212 CC examples/accel/perf/accel_perf.o 00:03:00.469 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.469 CC examples/nvme/arbitration/arbitration.o 00:03:00.469 LINK hello_world 00:03:00.469 CC examples/nvme/hotplug/hotplug.o 00:03:00.469 CXX test/cpp_headers/env.o 00:03:00.469 CC test/bdev/bdevio/bdevio.o 00:03:00.469 LINK reconnect 00:03:00.743 LINK arbitration 00:03:00.743 CXX test/cpp_headers/event.o 00:03:00.743 LINK hotplug 00:03:00.743 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.743 LINK 
accel_perf 00:03:00.743 CXX test/cpp_headers/fd_group.o 00:03:00.743 LINK nvme_manage 00:03:00.743 LINK bdevio 00:03:01.000 CXX test/cpp_headers/fd.o 00:03:01.000 LINK cmb_copy 00:03:01.000 CXX test/cpp_headers/file.o 00:03:01.000 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.000 CC examples/nvme/abort/abort.o 00:03:01.000 CXX test/cpp_headers/ftl.o 00:03:01.000 CXX test/cpp_headers/gpt_spec.o 00:03:01.000 CXX test/cpp_headers/hexlify.o 00:03:01.000 CXX test/cpp_headers/histogram_data.o 00:03:01.000 CXX test/cpp_headers/idxd.o 00:03:01.000 CXX test/cpp_headers/idxd_spec.o 00:03:01.000 LINK pmr_persistence 00:03:01.255 CXX test/cpp_headers/init.o 00:03:01.255 CXX test/cpp_headers/ioat.o 00:03:01.255 CXX test/cpp_headers/ioat_spec.o 00:03:01.255 CXX test/cpp_headers/iscsi_spec.o 00:03:01.255 CXX test/cpp_headers/json.o 00:03:01.255 CXX test/cpp_headers/jsonrpc.o 00:03:01.255 CXX test/cpp_headers/keyring.o 00:03:01.255 LINK abort 00:03:01.255 CXX test/cpp_headers/keyring_module.o 00:03:01.255 LINK cuse 00:03:01.255 CXX test/cpp_headers/likely.o 00:03:01.255 CXX test/cpp_headers/log.o 00:03:01.512 CXX test/cpp_headers/lvol.o 00:03:01.512 CXX test/cpp_headers/memory.o 00:03:01.512 CXX test/cpp_headers/mmio.o 00:03:01.512 CXX test/cpp_headers/nbd.o 00:03:01.512 CXX test/cpp_headers/net.o 00:03:01.512 CXX test/cpp_headers/notify.o 00:03:01.512 CXX test/cpp_headers/nvme.o 00:03:01.512 CXX test/cpp_headers/nvme_intel.o 00:03:01.512 CC examples/blob/hello_world/hello_blob.o 00:03:01.512 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.512 CC examples/blob/cli/blobcli.o 00:03:01.512 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.769 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.769 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.769 CXX test/cpp_headers/nvme_spec.o 00:03:01.769 CXX test/cpp_headers/nvme_zns.o 00:03:01.769 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.769 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.769 LINK hello_blob 00:03:01.769 LINK hello_bdev 00:03:01.769 CXX test/cpp_headers/nvmf.o 00:03:02.025 CXX test/cpp_headers/nvmf_spec.o 00:03:02.025 CXX test/cpp_headers/nvmf_transport.o 00:03:02.025 CXX test/cpp_headers/opal.o 00:03:02.025 CXX test/cpp_headers/opal_spec.o 00:03:02.025 CXX test/cpp_headers/pci_ids.o 00:03:02.025 CXX test/cpp_headers/pipe.o 00:03:02.025 CXX test/cpp_headers/queue.o 00:03:02.025 LINK blobcli 00:03:02.025 CXX test/cpp_headers/reduce.o 00:03:02.025 CXX test/cpp_headers/rpc.o 00:03:02.025 CXX test/cpp_headers/scheduler.o 00:03:02.282 CXX test/cpp_headers/scsi.o 00:03:02.282 CXX test/cpp_headers/scsi_spec.o 00:03:02.282 CXX test/cpp_headers/sock.o 00:03:02.282 CXX test/cpp_headers/stdinc.o 00:03:02.282 CXX test/cpp_headers/string.o 00:03:02.282 CXX test/cpp_headers/thread.o 00:03:02.282 CXX test/cpp_headers/trace.o 00:03:02.282 CXX test/cpp_headers/trace_parser.o 00:03:02.282 LINK bdevperf 00:03:02.282 CXX test/cpp_headers/tree.o 00:03:02.282 CXX test/cpp_headers/ublk.o 00:03:02.282 CXX test/cpp_headers/util.o 00:03:02.282 CXX test/cpp_headers/uuid.o 00:03:02.282 CXX test/cpp_headers/version.o 00:03:02.282 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.539 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.539 CXX test/cpp_headers/vhost.o 00:03:02.539 CXX test/cpp_headers/vmd.o 00:03:02.539 CXX test/cpp_headers/xor.o 00:03:02.539 CXX test/cpp_headers/zipf.o 00:03:02.797 CC examples/nvmf/nvmf/nvmf.o 00:03:03.360 LINK nvmf 00:03:04.732 LINK esnap 00:03:05.102 00:03:05.102 real 1m1.748s 00:03:05.102 user 5m17.045s 00:03:05.102 sys 1m42.877s 00:03:05.102 
************************************ 00:03:05.102 END TEST make 00:03:05.102 ************************************ 00:03:05.102 22:14:18 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:05.102 22:14:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.102 22:14:18 -- common/autotest_common.sh@1142 -- $ return 0 00:03:05.102 22:14:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.103 22:14:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.103 22:14:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.103 22:14:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.103 22:14:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.103 22:14:18 -- pm/common@44 -- $ pid=5142 00:03:05.103 22:14:18 -- pm/common@50 -- $ kill -TERM 5142 00:03:05.103 22:14:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.103 22:14:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.103 22:14:18 -- pm/common@44 -- $ pid=5144 00:03:05.103 22:14:18 -- pm/common@50 -- $ kill -TERM 5144 00:03:05.103 22:14:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:05.103 22:14:18 -- nvmf/common.sh@7 -- # uname -s 00:03:05.103 22:14:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.103 22:14:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.103 22:14:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.103 22:14:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.103 22:14:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.103 22:14:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.103 22:14:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.103 22:14:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.103 22:14:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.103 22:14:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.103 22:14:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:03:05.103 22:14:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:03:05.103 22:14:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.103 22:14:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.103 22:14:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:05.103 22:14:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.103 22:14:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:05.103 22:14:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.103 22:14:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.103 22:14:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.103 22:14:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.103 22:14:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.103 22:14:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.103 22:14:18 -- paths/export.sh@5 -- # export PATH 00:03:05.103 22:14:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.103 22:14:18 -- nvmf/common.sh@47 -- # : 0 00:03:05.103 22:14:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.103 22:14:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.103 22:14:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.103 22:14:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.103 22:14:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.103 22:14:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.103 22:14:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.103 22:14:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.103 22:14:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.103 22:14:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.103 22:14:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.103 22:14:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.103 22:14:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.103 22:14:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.103 22:14:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.103 22:14:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.103 22:14:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.103 22:14:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.103 22:14:18 -- spdk/autotest.sh@48 -- # udevadm_pid=52784 00:03:05.103 22:14:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.103 22:14:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.103 22:14:18 -- pm/common@17 -- # local monitor 00:03:05.103 22:14:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.103 22:14:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.103 22:14:18 -- pm/common@25 -- # sleep 1 00:03:05.103 22:14:18 -- pm/common@21 -- # date +%s 00:03:05.103 22:14:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721081658 00:03:05.103 22:14:18 -- pm/common@21 -- # date +%s 00:03:05.103 22:14:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721081658 00:03:05.103 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721081658_collect-cpu-load.pm.log 00:03:05.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721081658_collect-vmstat.pm.log 00:03:06.294 22:14:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.294 22:14:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.294 22:14:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.294 22:14:19 -- common/autotest_common.sh@10 -- # set +x 00:03:06.294 22:14:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.294 22:14:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:06.294 22:14:19 -- common/autotest_common.sh@10 -- # set +x 00:03:06.294 22:14:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:06.294 22:14:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:06.294 22:14:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:06.294 22:14:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:06.294 22:14:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:06.294 22:14:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.294 22:14:19 -- common/autotest_common.sh@1455 -- # uname 00:03:06.294 22:14:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:06.294 22:14:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.294 22:14:19 -- common/autotest_common.sh@1475 -- # uname 00:03:06.294 22:14:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:06.294 22:14:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:06.294 22:14:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:06.294 22:14:19 -- spdk/autotest.sh@72 -- # hash lcov 00:03:06.294 22:14:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:06.294 22:14:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:06.294 --rc lcov_branch_coverage=1 00:03:06.294 --rc lcov_function_coverage=1 00:03:06.294 --rc genhtml_branch_coverage=1 00:03:06.294 --rc genhtml_function_coverage=1 00:03:06.294 --rc genhtml_legend=1 00:03:06.294 --rc geninfo_all_blocks=1 00:03:06.294 ' 00:03:06.294 22:14:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:06.294 --rc lcov_branch_coverage=1 00:03:06.294 --rc lcov_function_coverage=1 00:03:06.294 --rc genhtml_branch_coverage=1 00:03:06.294 --rc genhtml_function_coverage=1 00:03:06.294 --rc genhtml_legend=1 00:03:06.294 --rc geninfo_all_blocks=1 00:03:06.294 ' 00:03:06.294 22:14:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:06.294 --rc lcov_branch_coverage=1 00:03:06.294 --rc lcov_function_coverage=1 00:03:06.294 --rc genhtml_branch_coverage=1 00:03:06.294 --rc genhtml_function_coverage=1 00:03:06.294 --rc genhtml_legend=1 00:03:06.294 --rc geninfo_all_blocks=1 00:03:06.294 --no-external' 00:03:06.294 22:14:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:06.294 --rc lcov_branch_coverage=1 00:03:06.294 --rc lcov_function_coverage=1 00:03:06.294 --rc genhtml_branch_coverage=1 00:03:06.294 --rc genhtml_function_coverage=1 00:03:06.294 --rc genhtml_legend=1 00:03:06.294 --rc geninfo_all_blocks=1 00:03:06.294 --no-external' 00:03:06.294 22:14:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.294 lcov: LCOV version 
1.14 00:03:06.294 22:14:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:21.181 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:21.181 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:33.379 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:33.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:33.380 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:33.380 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:33.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:33.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:33.380 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:33.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:33.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:36.688 22:14:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:36.688 22:14:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:36.688 22:14:49 -- common/autotest_common.sh@10 -- # set +x 00:03:36.688 22:14:49 -- spdk/autotest.sh@91 -- # rm -f 00:03:36.688 22:14:49 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.946 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:36.946 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:36.946 22:14:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:36.946 22:14:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.946 22:14:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.946 22:14:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.946 22:14:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.946 22:14:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.946 22:14:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.946 22:14:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.946 22:14:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.205 22:14:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.206 22:14:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:37.206 22:14:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:37.206 
22:14:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.206 22:14:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.206 22:14:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.206 22:14:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:37.206 22:14:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:37.206 22:14:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:37.206 22:14:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.206 22:14:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.206 22:14:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:37.206 22:14:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:37.206 22:14:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:37.206 22:14:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.206 22:14:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:37.206 22:14:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.206 22:14:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.206 22:14:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:37.206 22:14:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:37.206 22:14:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:37.206 No valid GPT data, bailing 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # pt= 00:03:37.206 22:14:50 -- scripts/common.sh@392 -- # return 1 00:03:37.206 22:14:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:37.206 1+0 records in 00:03:37.206 1+0 records out 00:03:37.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461276 s, 227 MB/s 00:03:37.206 22:14:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.206 22:14:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.206 22:14:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:37.206 22:14:50 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:37.206 22:14:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:37.206 No valid GPT data, bailing 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # pt= 00:03:37.206 22:14:50 -- scripts/common.sh@392 -- # return 1 00:03:37.206 22:14:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:37.206 1+0 records in 00:03:37.206 1+0 records out 00:03:37.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425923 s, 246 MB/s 00:03:37.206 22:14:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.206 22:14:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.206 22:14:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:37.206 22:14:50 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:37.206 22:14:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:37.206 No valid GPT data, bailing 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:37.206 22:14:50 -- scripts/common.sh@391 -- # pt= 00:03:37.206 22:14:50 -- scripts/common.sh@392 -- # return 1 
00:03:37.206 22:14:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:37.206 1+0 records in 00:03:37.206 1+0 records out 00:03:37.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447187 s, 234 MB/s 00:03:37.206 22:14:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.206 22:14:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.206 22:14:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:37.206 22:14:50 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:37.206 22:14:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:37.488 No valid GPT data, bailing 00:03:37.488 22:14:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:37.488 22:14:50 -- scripts/common.sh@391 -- # pt= 00:03:37.488 22:14:50 -- scripts/common.sh@392 -- # return 1 00:03:37.488 22:14:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:37.488 1+0 records in 00:03:37.488 1+0 records out 00:03:37.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421053 s, 249 MB/s 00:03:37.488 22:14:50 -- spdk/autotest.sh@118 -- # sync 00:03:37.488 22:14:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:37.488 22:14:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:37.488 22:14:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:40.783 22:14:53 -- spdk/autotest.sh@124 -- # uname -s 00:03:40.783 22:14:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:40.783 22:14:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:40.783 22:14:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.783 22:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.783 22:14:53 -- common/autotest_common.sh@10 -- # set +x 00:03:40.783 ************************************ 00:03:40.783 START TEST setup.sh 00:03:40.783 ************************************ 00:03:40.783 22:14:53 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:40.783 * Looking for test storage... 00:03:40.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.783 22:14:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:40.783 22:14:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:40.783 22:14:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:40.783 22:14:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.783 22:14:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.783 22:14:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.783 ************************************ 00:03:40.783 START TEST acl 00:03:40.783 ************************************ 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:40.783 * Looking for test storage... 
00:03:40.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:40.783 22:14:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:40.783 22:14:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:40.783 22:14:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.783 22:14:53 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.350 22:14:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:41.350 22:14:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:41.351 22:14:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.351 22:14:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:41.351 22:14:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.351 22:14:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:42.286 22:14:55 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.286 Hugepages 00:03:42.286 node hugesize free / total 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.286 00:03:42.286 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.286 22:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.559 22:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:42.559 22:14:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.559 22:14:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.559 22:14:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.559 22:14:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.559 ************************************ 00:03:42.559 START TEST denied 00:03:42.559 ************************************ 00:03:42.559 22:14:56 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:42.559 22:14:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:42.559 22:14:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:42.559 22:14:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.559 22:14:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:42.559 22:14:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.516 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.516 22:14:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.452 00:03:44.452 real 0m1.874s 00:03:44.452 user 0m0.723s 00:03:44.452 sys 0m1.134s 00:03:44.452 22:14:57 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.452 22:14:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:44.452 ************************************ 00:03:44.452 END TEST denied 00:03:44.452 ************************************ 00:03:44.452 22:14:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:44.452 22:14:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:44.452 22:14:57 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.452 22:14:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.452 22:14:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.452 ************************************ 00:03:44.452 START TEST allowed 00:03:44.452 ************************************ 00:03:44.452 22:14:57 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:44.452 22:14:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:44.452 22:14:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:44.452 22:14:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.452 22:14:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.452 22:14:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:45.386 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.386 22:14:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:46.322 00:03:46.322 real 0m1.876s 00:03:46.322 user 0m0.742s 00:03:46.322 sys 0m1.167s 00:03:46.322 22:14:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:46.322 ************************************ 00:03:46.322 END TEST allowed 00:03:46.322 ************************************ 00:03:46.322 22:14:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:46.322 22:14:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:46.322 ************************************ 00:03:46.322 END TEST acl 00:03:46.322 ************************************ 00:03:46.322 00:03:46.322 real 0m6.090s 00:03:46.322 user 0m2.442s 00:03:46.322 sys 0m3.697s 00:03:46.322 22:14:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.322 22:14:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 22:14:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.582 22:14:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:46.582 22:14:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.582 22:14:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.582 22:14:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 ************************************ 00:03:46.582 START TEST hugepages 00:03:46.582 ************************************ 00:03:46.582 22:14:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:46.582 * Looking for test storage... 00:03:46.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6019972 kB' 'MemAvailable: 7403040 kB' 'Buffers: 2436 kB' 'Cached: 1597228 kB' 'SwapCached: 0 kB' 'Active: 442972 kB' 'Inactive: 1268348 kB' 'Active(anon): 122144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 113372 kB' 'Mapped: 48676 kB' 'Shmem: 10488 kB' 'KReclaimable: 61664 kB' 'Slab: 135616 kB' 'SReclaimable: 61664 kB' 'SUnreclaim: 73952 kB' 'KernelStack: 6252 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 347976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.582 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.583 22:15:00 
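
The lookup that completes just above is the same pattern the trace repeats for every meminfo query in this run: open the (optionally per-node) meminfo file, strip any leading "Node N " prefix, then scan key by key with IFS=': ' until the requested field appears and echo its value. A minimal sketch of that pattern, reconstructed from the trace alone; get_meminfo_sketch is a hypothetical name, not the real setup/common.sh helper:

shopt -s extglob   # needed for the +([0-9]) prefix-stripping pattern below
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # a per-node query reads that node's own meminfo instead of the global one
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <id> "; drop that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# get_meminfo_sketch Hugepagesize would print 2048 on this VM, matching the
# value returned above before hugepages.sh records default_hugepages=2048.
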
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.583 22:15:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:46.583 22:15:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.583 22:15:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.583 22:15:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 ************************************ 00:03:46.583 START TEST default_setup 00:03:46.583 ************************************ 00:03:46.583 22:15:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:46.583 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.584 22:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.570 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.845 0000:00:11.0 (1b36 
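
Just before the test body, clear_hp echoes 0 into every hugepages-*/nr_hugepages entry of the VM's single NUMA node and exports CLEAR_HUGE=yes, so default_setup starts with no pages reserved. The sizing it then performs (size=2097152, nr_hugepages=1024, nodes_test[0]=1024 in the trace) is consistent with dividing the requested amount by the 2048 kB page size found earlier. A back-of-the-envelope sketch with illustrative variable names, not the real hugepages.sh internals:

hugepagesize_kb=2048                       # Hugepagesize reported by meminfo above
request_kb=2097152                         # get_test_nr_hugepages 2097152 0
nr_hugepages=$(( request_kb / hugepagesize_kb ))
echo "$nr_hugepages"                       # 1024, as logged
nodes_test[0]=$nr_hugepages                # single-node VM: everything on node 0
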
0010): nvme -> uio_pci_generic 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124524 kB' 'MemAvailable: 9507408 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 453764 kB' 'Inactive: 1268356 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123824 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135288 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6364 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.845 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124528 kB' 'MemAvailable: 9507416 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 453432 kB' 'Inactive: 1268360 kB' 'Active(anon): 132604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135136 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73856 kB' 'KernelStack: 6316 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.847 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124528 kB' 'MemAvailable: 9507416 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 453400 kB' 'Inactive: 1268360 kB' 'Active(anon): 132572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123784 kB' 'Mapped: 
48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6316 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.848 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 
22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.849 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.850 nr_hugepages=1024 00:03:47.850 resv_hugepages=0 00:03:47.850 surplus_hugepages=0 00:03:47.850 anon_hugepages=0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124528 kB' 'MemAvailable: 9507416 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 453404 kB' 'Inactive: 1268360 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123788 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6316 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 
22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.851 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124528 kB' 'MemUsed: 4117448 kB' 'SwapCached: 0 kB' 'Active: 453492 kB' 'Inactive: 1268360 kB' 'Active(anon): 132664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1599656 kB' 'Mapped: 48680 kB' 'AnonPages: 123792 kB' 'Shmem: 10464 kB' 'KernelStack: 6316 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
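[editor's note] The trace above is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot with IFS=': ' and read -r var val _, logging a "continue" for every key that is not the requested one (HugePages_Surp here) and ending with "echo 0" / "return 0" when the key matches. The following is a minimal, hedged sketch of that parsing pattern reconstructed only from the traced statements; the exact function layout, the per-node fallback handling and the return value on a missing key are simplifications, not the verbatim SPDK source.

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob

get_meminfo() {
    local get=$1          # key to look up, e.g. HugePages_Surp or AnonHugePages
    local node=${2:-}     # optional NUMA node; empty in the calls traced above
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # The trace probes a per-node meminfo file first and falls back to
    # /proc/meminfo when no node is given ("node/meminfo" does not exist).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node lines
    # Replay the snapshot through read, as the printf/read pairs in the trace do:
    # every non-matching key logs a "continue"; the matching key echoes its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. "echo 0" for HugePages_Surp above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

For the meminfo snapshot printed later in this log, a call such as get_meminfo HugePages_Free would print 512, which is the value the surrounding hugepages checks consume.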
00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.853 node0=1024 expecting 1024 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.853 ************************************ 00:03:47.853 END TEST default_setup 00:03:47.853 ************************************ 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.853 00:03:47.853 real 0m1.239s 00:03:47.853 user 0m0.540s 00:03:47.853 sys 0m0.647s 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.853 22:15:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:48.112 22:15:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.112 22:15:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:48.112 22:15:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.112 22:15:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.112 22:15:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.112 ************************************ 00:03:48.112 START TEST per_node_1G_alloc 00:03:48.112 ************************************ 00:03:48.112 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:48.112 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:48.112 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:48.112 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.112 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.113 22:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.113 22:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.633 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.633 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9174476 kB' 'MemAvailable: 10557368 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 453156 kB' 'Inactive: 1268364 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123684 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6212 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.633 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.633 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.634 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9174476 kB' 'MemAvailable: 10557368 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 452972 kB' 'Inactive: 1268364 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123544 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6256 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.635 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.636 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.637 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9174476 kB' 'MemAvailable: 10557368 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 452972 kB' 'Inactive: 1268364 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123284 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6256 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.638 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.639 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 
22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.640 nr_hugepages=512 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:48.640 resv_hugepages=0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.640 surplus_hugepages=0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.640 anon_hugepages=0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.640 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9174476 kB' 'MemAvailable: 10557368 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 452936 kB' 'Inactive: 1268364 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123508 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135124 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6240 kB' 
'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.641 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.641 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 
22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:48.642 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9174476 kB' 'MemUsed: 3067500 kB' 'SwapCached: 0 kB' 'Active: 453020 kB' 'Inactive: 1268364 kB' 'Active(anon): 132192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1599656 kB' 'Mapped: 48680 kB' 'AnonPages: 123556 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135120 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73840 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.643 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.644 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.645 node0=512 expecting 512 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.645 00:03:48.645 real 0m0.737s 00:03:48.645 user 0m0.343s 00:03:48.645 sys 0m0.446s 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.645 ************************************ 00:03:48.645 22:15:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.645 END TEST per_node_1G_alloc 00:03:48.645 ************************************ 00:03:48.904 22:15:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.904 22:15:02 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:48.904 22:15:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.904 22:15:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.904 22:15:02 
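Editor's note: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" lines above are set -x output of setup/common.sh's get_meminfo helper splitting /proc/meminfo on IFS=': ' and stepping past every field until it reaches the one requested (HugePages_Surp here, which evaluates to 0). A minimal sketch of that parsing pattern, assuming only the conventions visible in the trace; the name get_meminfo_sketch and the not-found fallback are illustrative, not the exact SPDK code:

#!/usr/bin/env bash
# Minimal sketch of the meminfo scan driving the xtrace above: split each line
# on ': ', compare the key against the requested field, and echo its value.
# Illustrative only -- get_meminfo_sketch is not the exact setup/common.sh code.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0   # field not present at all
}

# The scan above ends with this lookup evaluating to 0:
get_meminfo_sketch HugePages_Surp

Under xtrace, every non-matching field produces exactly the "[[ ... ]] / continue / IFS=': ' / read -r var val _" quartet seen above, which is why these lines dominate the log.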
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.904 ************************************ 00:03:48.904 START TEST even_2G_alloc 00:03:48.904 ************************************ 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.904 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.426 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.426 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc 
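Editor's note: just before the verify pass, the trace above shows even_2G_alloc calling get_test_nr_hugepages with 2097152 (kB, judging by the 'Hugetlb: 2097152 kB' line in the meminfo dumps that follow), arriving at nr_hugepages=1024, and then setting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before running scripts/setup.sh so the pages are spread evenly over the NUMA nodes. A rough sketch of that arithmetic, under those assumptions and with illustrative variable names:

#!/usr/bin/env bash
# Sketch of the sizing arithmetic behind the even_2G_alloc preamble above:
# 2097152 kB of hugepages at the 2048 kB page size reported by meminfo is
# 1024 pages, and an even allocation spreads them across the online NUMA
# nodes (a single node on this VM). Variable names are illustrative only.
requested_kb=2097152
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
nr_hugepages=$(( requested_kb / hugepage_kb ))                   # 1024

mapfile -t nodes < <(ls -d /sys/devices/system/node/node[0-9]* 2> /dev/null)
(( ${#nodes[@]} > 0 )) || nodes=(node0)        # fall back to a single node
per_node=$(( nr_hugepages / ${#nodes[@]} ))
echo "nr_hugepages=$nr_hugepages across ${#nodes[@]} node(s): $per_node per node"

The meminfo dumps below are consistent with this: HugePages_Total: 1024 at Hugepagesize: 2048 kB, i.e. 2 GiB of hugepages on the single node0.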
-- setup/hugepages.sh@92 -- # local surp 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140732 kB' 'MemAvailable: 9523628 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453356 kB' 'Inactive: 1268368 kB' 'Active(anon): 132528 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135108 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6316 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.426 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.427 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140732 kB' 'MemAvailable: 9523628 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453136 kB' 'Inactive: 
1268368 kB' 'Active(anon): 132308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123744 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135108 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6316 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.428 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140732 kB' 'MemAvailable: 9523628 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453480 kB' 'Inactive: 1268368 kB' 'Active(anon): 132652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124136 kB' 'Mapped: 49080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135108 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6364 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.429 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.430 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.431 nr_hugepages=1024 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.431 resv_hugepages=0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.431 surplus_hugepages=0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.431 anon_hugepages=0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140732 kB' 'MemAvailable: 9523628 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453156 kB' 'Inactive: 1268368 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123572 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135096 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73816 kB' 'KernelStack: 6316 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.431 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.432 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
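The xtrace surrounding this point is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" record at a time (IFS=': '; read -r var val _) and skipping every field until it reaches the one it was asked for; the "echo 1024" a few lines below is that match for HugePages_Total. A minimal stand-alone sketch of the same lookup pattern, for orientation only: the function name is invented here, per-node handling is omitted, and it is not SPDK's actual common.sh.

# Sketch only (not SPDK's get_meminfo): scan /proc/meminfo for one field
# and print its numeric value, mirroring the IFS=': ' read loop traced above.
get_meminfo_sketch() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"        # value only, e.g. "1024" for HugePages_Total
            return 0
        fi
    done </proc/meminfo
    return 1                   # requested field not present
}
# e.g.: get_meminfo_sketch HugePages_Rsvd   -> prints 0 on the node traced here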
00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.433 22:15:03 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140732 kB' 'MemUsed: 4101244 kB' 'SwapCached: 0 kB' 'Active: 453108 kB' 'Inactive: 1268368 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1599660 kB' 'Mapped: 48836 kB' 'AnonPages: 123524 kB' 'Shmem: 10464 kB' 'KernelStack: 6316 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135100 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.433 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.434 node0=1024 expecting 1024 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.434 00:03:49.434 real 0m0.727s 00:03:49.434 user 0m0.326s 00:03:49.434 sys 0m0.448s 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.434 22:15:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.434 ************************************ 00:03:49.434 END TEST even_2G_alloc 00:03:49.434 ************************************ 00:03:49.693 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:49.694 22:15:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:49.694 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.694 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.694 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.694 ************************************ 00:03:49.694 START TEST odd_alloc 00:03:49.694 ************************************ 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
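Before odd_alloc starts producing output, the trace above shows get_test_nr_hugepages turning HUGEMEM=2049 into nr_hugepages=1025: 2049 MiB is 2098176 kB, and at the 2048 kB Hugepagesize reported in meminfo that is 1024.5 pages, so the count ends up rounded to the odd value 1025 (matching the Hugetlb: 2099200 kB seen later). A back-of-the-envelope sketch of that conversion; the explicit round-up expression is an assumption inferred from the traced values, not code copied from hugepages.sh.

# Sketch of the sizing arithmetic implied by the trace (assumed rounding).
hugemem_mb=2049                      # HUGEMEM from the trace above
hugepagesize_kb=2048                 # Hugepagesize reported by /proc/meminfo
size_kb=$((hugemem_mb * 1024))       # 2098176 kB, the argument to get_test_nr_hugepages
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"    # prints nr_hugepages=1025

The test then expects the kernel to hand exactly that count back, the same way even_2G_alloc just verified "(( 1024 == nr_hugepages + surp + resv ))" and reported "node0=1024 expecting 1024".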
00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.694 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.264 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.264 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133908 kB' 'MemAvailable: 9516804 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453448 kB' 'Inactive: 1268368 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135188 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73908 kB' 'KernelStack: 6256 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.264 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 
22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 
22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
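The block above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key until it reaches AnonHugePages and echoes 0 (so anon=0); the HugePages_Surp and HugePages_Rsvd lookups that follow in the trace reuse the same helper before verify_nr_hugepages checks that 1025 == nr_hugepages + surp + resv. A self-contained sketch of that lookup pattern, with the function name and argument handling assumed for illustration rather than taken from common.sh:

```bash
#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup pattern visible in the trace.
# The helper name and interface are assumptions; only the parsing approach
# (mapfile + "Node N " prefix strip + IFS=': ' read) mirrors the traced script.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local mem line var val _
    # per-node statistics come from sysfs when a node index is supplied
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"               # e.g. HugePages_Surp -> 0, HugePages_Total -> 1025
        return 0
    done
    return 1
}

get_meminfo_field HugePages_Surp   # prints 0 on the VM whose meminfo is dumped above
```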
00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8134044 kB' 'MemAvailable: 9516940 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453260 kB' 'Inactive: 1268368 kB' 'Active(anon): 132432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123616 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135184 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6256 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.265 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.266 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 
22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8134044 kB' 'MemAvailable: 9516940 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453244 kB' 'Inactive: 1268368 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123612 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135184 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6256 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.267 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.268 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.268 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.269 nr_hugepages=1025 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:50.269 resv_hugepages=0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.269 surplus_hugepages=0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.269 anon_hugepages=0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8134044 kB' 'MemAvailable: 9516940 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 453312 kB' 'Inactive: 1268368 kB' 'Active(anon): 132484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123624 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135184 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6256 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 358312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.269 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.270 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8134044 kB' 'MemUsed: 4107932 kB' 'SwapCached: 0 kB' 'Active: 453292 kB' 'Inactive: 1268368 kB' 'Active(anon): 132464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1599660 kB' 'Mapped: 48696 kB' 'AnonPages: 123604 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135176 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
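The long run of IFS=': ' / read -r var val _ / continue entries above is the xtrace of the per-key meminfo lookup in setup/common.sh: it loads either /proc/meminfo or a per-node meminfo file, strips the "Node N " prefix from per-node lines, then walks line by line until the requested key (here HugePages_Surp on node 0) matches and its value is echoed. A condensed sketch of that pattern follows; the helper name lookup_meminfo and its argument handling are illustrative, not the script's exact interface.

#!/usr/bin/env bash
# Sketch of the per-key lookup pattern visible in the trace above.
shopt -s extglob

lookup_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node is given and its meminfo exists, read the per-node file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so both files parse alike.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"                         # numeric value, unit dropped by read
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1                                # key not present
}

# e.g. the lookup the trace is performing here:
lookup_meminfo HugePages_Surp 0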
00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.271 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
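Once these lookups return, the odd_alloc check in setup/hugepages.sh reduces to simple accounting: HugePages_Total must equal the requested count plus surplus plus reserved pages, and on this single-node VM all 1025 pages are expected on node 0 ("node0=1025 expecting 1025" below). A sketch of that arithmetic, reusing the lookup_meminfo helper sketched above and the numbers reported in this run:

# Sketch of the hugepage accounting the trace verifies next (illustrative names).
nr_hugepages=1025                        # requested (odd) page count
resv=$(lookup_meminfo HugePages_Rsvd)    # 0 in this run
surp=$(lookup_meminfo HugePages_Surp)    # 0 in this run
total=$(lookup_meminfo HugePages_Total)  # 1025 in this run

(( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch" >&2

# Per-node check: every configured page should land on node 0 here.
expected_node0=1025
node0_total=$(lookup_meminfo HugePages_Total 0)
echo "node0=${node0_total} expecting ${expected_node0}"
[[ $node0_total == "$expected_node0" ]] || echo "node0 mismatch" >&2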
00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.272 node0=1025 expecting 1025 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:50.272 00:03:50.272 real 0m0.757s 00:03:50.272 user 0m0.353s 00:03:50.272 sys 0m0.457s 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.272 22:15:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.272 ************************************ 00:03:50.272 END TEST odd_alloc 00:03:50.272 ************************************ 00:03:50.531 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.531 22:15:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.531 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.531 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.531 22:15:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.531 ************************************ 00:03:50.531 START TEST custom_alloc 00:03:50.531 ************************************ 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.531 22:15:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.054 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:51.054 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178164 kB' 'MemAvailable: 10561040 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448032 kB' 'Inactive: 1268368 kB' 'Active(anon): 127204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 118312 kB' 'Mapped: 48032 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134896 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73660 kB' 'KernelStack: 6068 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.054 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:51.055 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 9178164 kB' 'MemAvailable: 10561040 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 447788 kB' 'Inactive: 1268368 kB' 'Active(anon): 126960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 118072 kB' 'Mapped: 47940 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134896 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73660 kB' 'KernelStack: 6112 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.056 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178164 kB' 'MemAvailable: 10561040 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 447924 kB' 'Inactive: 1268368 kB' 'Active(anon): 127096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 118232 kB' 'Mapped: 47940 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134896 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73660 kB' 'KernelStack: 6112 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.057 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.058 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.059 nr_hugepages=512 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:51.059 resv_hugepages=0 
00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.059 surplus_hugepages=0 00:03:51.059 anon_hugepages=0 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177912 kB' 'MemAvailable: 10560788 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448112 kB' 'Inactive: 1268368 kB' 'Active(anon): 127284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 118396 kB' 'Mapped: 47940 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134892 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73656 kB' 'KernelStack: 6112 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.059 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.059 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.060 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 
22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
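[annotation] At this point the system-wide HugePages_Total read back as 512, satisfying (( 512 == nr_hugepages + surp + resv )), and get_nodes recorded the expected page count for each NUMA node directory (a single node on this VM) before re-reading node-scoped meminfo. A hedged sketch of that per-node bookkeeping, with illustrative variable names:

    # Per-node bookkeeping sketched from the trace: one expected-pages entry
    # per /sys/devices/system/node/nodeN directory (names are illustrative).
    declare -A expected_pages
    nr_hugepages=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue          # no NUMA sysfs entries at all
        node_id=${node_dir##*node}              # "node0" -> "0"
        expected_pages[$node_id]=$nr_hugepages  # single-node VM: all 512 pages on node 0
    done
    echo "nodes found: ${!expected_pages[*]}"   # prints "0" on this VM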
-- # mem_f=/proc/meminfo 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178172 kB' 'MemUsed: 3063804 kB' 'SwapCached: 0 kB' 'Active: 447788 kB' 'Inactive: 1268368 kB' 'Active(anon): 126960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1599660 kB' 'Mapped: 47940 kB' 'AnonPages: 118120 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61236 kB' 'Slab: 134892 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.061 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.062 node0=512 expecting 512 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:51.062 00:03:51.062 real 0m0.733s 00:03:51.062 user 0m0.319s 00:03:51.062 sys 0m0.454s 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.062 22:15:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.062 ************************************ 00:03:51.062 END TEST custom_alloc 
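[annotation] The node-scoped lookup that closed out custom_alloc (HugePages_Surp for node 0, returning 0, then 'node0=512 expecting 512' passing) switched its data source: when /sys/devices/system/node/node0/meminfo exists it is read instead of /proc/meminfo, and the leading "Node 0 " prefix on every line is stripped so the same field scan still applies. A sketch of that source selection, reconstructed from the trace:

    # Node-aware source selection as traced above: prefer the per-node meminfo
    # file and strip its "Node <id> " line prefix before scanning fields.
    shopt -s extglob                             # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")             # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep -F HugePages_Surp   # 0 in the run above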
00:03:51.062 ************************************ 00:03:51.322 22:15:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:51.322 22:15:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:51.322 22:15:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.322 22:15:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.322 22:15:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.322 ************************************ 00:03:51.322 START TEST no_shrink_alloc 00:03:51.322 ************************************ 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.322 22:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:51.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.893 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:51.893 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:51.893 
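[annotation] no_shrink_alloc begins by converting a 2097152 kB request into a page count: with the default 2048 kB hugepage size that is 1024 pages, all assigned to the single requested node 0 (the meminfo dump that follows indeed shows HugePages_Total: 1024 and Hugetlb: 2097152 kB). A small worked sketch of that sizing step, with illustrative names rather than the script's exact variables:

    # Sizing step sketched from the trace: kB request -> hugepage count per node.
    size_kb=2097152
    default_hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
    nr_hugepages=$(( size_kb / default_hugepage_kb ))                      # 2097152 / 2048 = 1024
    declare -A nodes_test
    for node in 0; do                            # the test passes node id 0 explicitly
        nodes_test[$node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"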
22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.893 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8129392 kB' 'MemAvailable: 9512268 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448400 kB' 'Inactive: 1268368 kB' 'Active(anon): 127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118360 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134876 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73640 kB' 'KernelStack: 6124 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.894 
22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.894 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace of the scan loop condensed: each iteration re-runs IFS=': ', read -r var val _, compares the field name against AnonHugePages and takes the continue branch; this repeats for every remaining /proc/meminfo field from Active(file) through HardwareCorrupted ...]
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
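[Editor's note: the condensed records above are bash xtrace from the meminfo helper that the hugepages test calls once per field of interest. For readers who only want the shape of that helper, here is a minimal stand-alone sketch of the same pattern; the name get_meminfo_sketch and the exact unit handling are illustrative assumptions, not the actual setup/common.sh source.]

    #!/usr/bin/env bash
    # Illustrative sketch only (hypothetical helper, not the real setup/common.sh):
    # pick the meminfo file, strip any per-node "Node N " prefix, then scan
    # "key: value" pairs until the requested field is found and print its value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}        # e.g. get=AnonHugePages; empty node => system-wide
        local var val _
        local mem_f=/proc/meminfo mem
        # When a NUMA node is given and its meminfo exists, read that instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do    # the trailing "kB" unit, if any, lands in $_
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 in the run traced above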
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.895 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8129392 kB' 'MemAvailable: 9512268 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448168 kB' 'Inactive: 1268368 kB' 'Active(anon): 127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118424 kB' 'Mapped: 47928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134864 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73628 kB' 'KernelStack: 6108 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
[... xtrace of the scan loop condensed: the fields MemTotal through HugePages_Rsvd are each compared against HugePages_Surp and every iteration takes the continue branch ...]
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
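[Editor's note: a quick sanity check on the /proc/meminfo snapshot echoed just above: the hugepage fields are self-consistent, since HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB, which is exactly the Hugetlb figure in the same dump, and HugePages_Free equals HugePages_Total, i.e. none of the pool is in use at this point in the test.]

    # HugePages_Total * Hugepagesize, in kB -- matches the 'Hugetlb: 2097152 kB' line above
    echo $(( 1024 * 2048 ))    # prints 2097152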
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.898 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8129392 kB' 'MemAvailable: 9512268 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448184 kB' 'Inactive: 1268368 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 47928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134864 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73628 kB' 'KernelStack: 6108 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
[... xtrace of the scan loop condensed: the fields MemTotal through HugePages_Free are each compared against HugePages_Rsvd and every iteration takes the continue branch ...]
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:51.901 nr_hugepages=1024
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:51.901 resv_hugepages=0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:51.901 surplus_hugepages=0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.901 anon_hugepages=0
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
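[Editor's note: putting the three lookups together, the bookkeeping the trace just performed reduces to the check sketched below, with values taken directly from the records above; this is a paraphrase of the test's intent, not the literal setup/hugepages.sh code. The records that follow fetch HugePages_Total for the next comparison.]

    # anon, surp and resv all came back 0; 1024 is the expected page count that
    # appears as a literal in the traced arithmetic and must still be accounted for.
    anon=0 surp=0 resv=0 nr_hugepages=1024
    (( 1024 == nr_hugepages + surp + resv ))   # no surplus or reserved pages outstanding
    (( 1024 == nr_hugepages ))                 # the pool itself has not shrunk
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"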
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.901 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8129392 kB' 'MemAvailable: 9512268 kB' 'Buffers: 2436 kB' 'Cached: 1597224 kB' 'SwapCached: 0 kB' 'Active: 448152 kB' 'Inactive: 1268368 kB' 'Active(anon): 127324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 47928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134864 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73628 kB' 'KernelStack: 6108 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB'
[... xtrace of the scan loop condensed: the fields MemTotal through Mapped have been compared against HugePages_Total so far, each taking the continue branch; the scan resumes below ...]
00:03:51.902 22:15:05
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.903 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8129392 kB' 'MemUsed: 4112584 kB' 'SwapCached: 0 kB' 'Active: 448100 kB' 'Inactive: 1268368 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1599660 kB' 'Mapped: 47928 kB' 'AnonPages: 118320 kB' 
'Shmem: 10464 kB' 'KernelStack: 6092 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61236 kB' 'Slab: 134864 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.904 22:15:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.904 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 
22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:51.905 node0=1024 expecting 1024 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:51.905 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.906 22:15:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.475 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.475 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.475 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:52.475 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132048 kB' 'MemAvailable: 9514920 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 448308 kB' 'Inactive: 1268364 kB' 'Active(anon): 127480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 48128 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134848 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6156 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
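From the 22:15:06 entries onward this is a second verify_nr_hugepages pass: the hugepages.sh@202/@204 lines just above set CLEAR_HUGE=no and NRHUGE=512, re-run scripts/setup.sh (which reports that 512 hugepages were requested but 1024 are already allocated on node0), and then repeat the meminfo checks, starting with the AnonHugePages scan that continues below. Condensed, the verification traced in this test amounts to roughly the following hypothetical sketch, which reuses the illustrative get_meminfo_sketch helper from the earlier note and is not the actual setup/hugepages.sh code:

    # verify_nr_hugepages_sketch: hypothetical condensation of the checks seen
    # in this trace; depends on the get_meminfo_sketch helper defined earlier.
    verify_nr_hugepages_sketch() {
        local expected=1024 node=0 anon=0
        # THP state; the "always [madvise] never" string in the trace comes from a
        # file like this one (path assumed). Anon THP only counts when not "[never]".
        if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo_sketch AnonHugePages)
        fi
        echo "anon_hugepages=${anon}"
        local total surp resv
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # system-wide accounting; in the run above this is 1024 == 1024 + 0 + 0
        (( total == expected + surp + resv )) || return 1
        # per-node accounting, mirroring the earlier "node0=1024 expecting 1024" line
        local node_total
        node_total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node${node}=${node_total} expecting ${expected}"
        [[ $node_total == "$expected" ]]
    }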
00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132048 kB' 'MemAvailable: 9514920 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 447776 kB' 'Inactive: 1268364 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118048 kB' 'Mapped: 48000 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134848 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6152 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.477 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 
22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.478 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.478 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132048 kB' 'MemAvailable: 9514920 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 447768 kB' 'Inactive: 1268364 kB' 'Active(anon): 126940 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118044 kB' 'Mapped: 48000 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134848 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6168 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.479 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.479 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.480 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.481 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.741 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.742 nr_hugepages=1024 00:03:52.742 resv_hugepages=0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.742 surplus_hugepages=0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.742 anon_hugepages=0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132048 kB' 'MemAvailable: 9514920 kB' 'Buffers: 2436 kB' 'Cached: 1597220 kB' 'SwapCached: 0 kB' 'Active: 447980 kB' 'Inactive: 1268364 kB' 'Active(anon): 127152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118260 kB' 'Mapped: 48000 kB' 'Shmem: 10464 kB' 'KReclaimable: 61236 kB' 'Slab: 134848 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6152 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.742 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.743 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132048 kB' 'MemUsed: 4109928 kB' 'SwapCached: 0 kB' 'Active: 
448204 kB' 'Inactive: 1268364 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1268364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1599656 kB' 'Mapped: 48000 kB' 'AnonPages: 118488 kB' 'Shmem: 10464 kB' 'KernelStack: 6168 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61236 kB' 'Slab: 134848 kB' 'SReclaimable: 61236 kB' 'SUnreclaim: 73612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 
22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.744 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.745 node0=1024 expecting 1024 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.745 00:03:52.745 real 0m1.458s 00:03:52.745 user 0m0.661s 00:03:52.745 sys 0m0.898s 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.745 22:15:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.745 ************************************ 00:03:52.745 END TEST no_shrink_alloc 00:03:52.745 ************************************ 00:03:52.745 22:15:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.745 
22:15:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.745 22:15:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.745 00:03:52.745 real 0m6.240s 00:03:52.745 user 0m2.750s 00:03:52.745 sys 0m3.724s 00:03:52.745 22:15:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.745 ************************************ 00:03:52.745 END TEST hugepages 00:03:52.745 ************************************ 00:03:52.745 22:15:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.745 22:15:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.745 22:15:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:52.745 22:15:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.745 22:15:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.745 22:15:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.745 ************************************ 00:03:52.745 START TEST driver 00:03:52.745 ************************************ 00:03:52.745 22:15:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:53.004 * Looking for test storage... 00:03:53.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.004 22:15:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:53.004 22:15:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.004 22:15:06 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.570 22:15:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:53.570 22:15:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.570 22:15:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.570 22:15:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.570 ************************************ 00:03:53.570 START TEST guess_driver 00:03:53.570 ************************************ 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:53.570 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
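The guess_driver trace around this point reduces to one small decision: the vfio branch is taken only when the host actually has IOMMU groups (or unsafe no-IOMMU mode is enabled); otherwise the test falls back to uio_pci_generic, provided modprobe can resolve the module. A condensed, illustrative sketch of that decision follows; it is not the SPDK driver.sh itself, and the vfio-pci name in the first branch is an assumption, since this run never takes it.
# Hedged sketch of the driver choice being traced here (illustrative, not
# the SPDK driver.sh). vfio-pci is assumed to be the preferred driver when
# IOMMU groups exist or unsafe no-IOMMU mode is enabled; otherwise fall
# back to uio_pci_generic if modprobe can resolve the module chain.
pick_kernel_driver() {
    local ngroups unsafe=''
    ngroups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if ((ngroups > 0)) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic        # the branch taken in this run
    else
        echo 'No valid driver found'
    fi
}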
00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:53.829 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:53.829 Looking for driver=uio_pci_generic 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.829 22:15:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.396 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:54.396 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:54.396 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.654 22:15:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.591 00:03:55.591 real 0m1.895s 00:03:55.591 user 0m0.661s 00:03:55.591 sys 0m1.285s 00:03:55.591 22:15:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:55.591 22:15:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.591 ************************************ 00:03:55.591 END TEST guess_driver 00:03:55.591 ************************************ 00:03:55.591 22:15:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:55.591 ************************************ 00:03:55.591 END TEST driver 00:03:55.591 ************************************ 00:03:55.591 00:03:55.591 real 0m2.860s 00:03:55.591 user 0m0.985s 00:03:55.591 sys 0m2.011s 00:03:55.591 22:15:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.591 22:15:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.591 22:15:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.591 22:15:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:55.591 22:15:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.591 22:15:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.591 22:15:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.591 ************************************ 00:03:55.591 START TEST devices 00:03:55.591 ************************************ 00:03:55.591 22:15:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:55.852 * Looking for test storage... 00:03:55.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.852 22:15:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:55.852 22:15:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:55.852 22:15:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.852 22:15:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
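The devices test opens by filtering out zoned namespaces: for every /sys/block/nvme* entry it reads queue/zoned and flags any device that does not report none (the loop continues below for nvme0n3 and nvme1n1). An illustrative stand-alone version of that filter, assuming only the sysfs layout visible in the trace; it is not the autotest_common.sh get_zoned_devs helper itself.
# Illustrative version of the zoned-namespace filter traced here: collect
# the names of NVMe block devices whose queue/zoned attribute reports
# anything other than "none".
collect_zoned_nvme() {
    local -A zoned=()
    local sys dev
    for sys in /sys/block/nvme*; do
        [[ -e $sys/queue/zoned ]] || continue
        dev=${sys##*/}
        [[ $(<"$sys/queue/zoned") != none ]] && zoned[$dev]=1
    done
    if ((${#zoned[@]})); then
        printf '%s\n' "${!zoned[@]}"   # empty on this VM: all four namespaces report "none"
    fi
}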
00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.789 22:15:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.789 No valid GPT data, bailing 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.789 22:15:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.789 22:15:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:56.789 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:56.789 
22:15:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:56.789 22:15:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:57.085 No valid GPT data, bailing 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:57.085 No valid GPT data, bailing 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:57.085 22:15:10 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:57.085 No valid GPT data, bailing 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:57.085 22:15:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:57.085 22:15:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:57.085 22:15:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:57.085 22:15:10 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.085 22:15:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.085 22:15:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:57.085 ************************************ 00:03:57.085 START TEST nvme_mount 00:03:57.086 ************************************ 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.086 22:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:58.463 Creating new GPT entries in memory. 00:03:58.463 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.463 other utilities. 00:03:58.463 22:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.463 22:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.463 22:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.463 22:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.463 22:15:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:59.414 Creating new GPT entries in memory. 00:03:59.414 The operation has completed successfully. 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57039 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.414 22:15:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.673 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.932 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.932 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.189 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.189 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.189 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:00.189 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.189 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.190 22:15:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.753 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.014 22:15:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.014 22:15:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.582 22:15:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.582 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.582 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.582 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.582 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.840 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.840 00:04:01.840 real 0m4.654s 00:04:01.840 user 0m0.890s 00:04:01.840 sys 0m1.510s 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.840 ************************************ 00:04:01.840 22:15:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.840 END TEST nvme_mount 00:04:01.840 ************************************ 00:04:01.840 22:15:15 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:01.840 22:15:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.840 22:15:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.840 22:15:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.840 22:15:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.840 ************************************ 00:04:01.840 START TEST dm_mount 00:04:01.840 ************************************ 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
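
Before the dm_mount preparation continues below, it is worth summarizing what the nvme_mount test that just finished actually exercised. Reduced to its essentials, the traced commands amount to roughly the following shell sequence (a sketch reconstructed from the xtrace above; the paths, the 1024M size and the device names are taken from the log, while the conditional structure of setup/common.sh and setup/devices.sh is approximated, not copied):

    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    # mkfs helper: format the device and mount it under the test directory
    mkdir -p "$nvme_mount"
    [[ -e /dev/nvme0n1 ]]
    mkfs.ext4 -qF /dev/nvme0n1 1024M
    mount /dev/nvme0n1 "$nvme_mount"

    # cleanup_nvme: unmount if mounted, then wipe partition and disk signatures
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1

The verify step in between simply re-runs scripts/setup.sh config with PCI_ALLOWED restricted to 0000:00:11.0 and checks that the "Active devices:" line it prints reports the expected nvme0n1 mount, which is why the same PCI comparison loop repeats after every mount change in the trace.
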
00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.840 22:15:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:03.215 Creating new GPT entries in memory. 00:04:03.215 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.215 other utilities. 00:04:03.215 22:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.215 22:15:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.215 22:15:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.215 22:15:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.215 22:15:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:04.150 Creating new GPT entries in memory. 00:04:04.150 The operation has completed successfully. 00:04:04.150 22:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.150 22:15:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.150 22:15:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.150 22:15:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.150 22:15:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:05.087 The operation has completed successfully. 
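
The partitioning that just completed carves two equal partitions out of /dev/nvme0n1: the trace shows size=1073741824 divided by 4096, giving 262144 blocks per partition, placed back to back starting at block 2048. The "GPT data structures destroyed!" message comes from the --zap-all pass, and sync_dev_uevents.sh is started alongside, presumably to wait for the uevents of the new nvme0n1p1/nvme0n1p2 nodes before they are used. A sketch of the equivalent commands, reconstructed from the xtrace:

    disk=/dev/nvme0n1
    size=$(( 1073741824 / 4096 ))          # 262144 blocks per partition

    sgdisk "$disk" --zap-all               # wipe any existing GPT/MBR structures
    # first partition: blocks 2048..264191, second: 264192..526335
    flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=2:$(( 2048 + size )):$(( 2048 + 2 * size - 1 ))

The flock matches the trace: sgdisk is invoked once per partition, so the script serializes access to the disk between the two calls.
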
00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57483 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:05.087 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.088 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.347 22:15:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.606 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.606 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.606 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.606 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.864 22:15:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.123 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.381 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.381 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.381 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.381 22:15:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.640 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.640 00:04:06.640 real 0m4.737s 00:04:06.640 user 0m0.622s 00:04:06.640 sys 0m1.034s 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.640 22:15:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.640 ************************************ 00:04:06.640 END TEST dm_mount 00:04:06.640 ************************************ 00:04:06.640 22:15:20 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:06.640 22:15:20 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.640 22:15:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.941 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:06.941 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:06.941 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.941 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.941 22:15:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.941 00:04:06.941 real 0m11.271s 00:04:06.941 user 0m2.250s 00:04:06.941 sys 0m3.413s 00:04:06.941 22:15:20 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.941 22:15:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.941 ************************************ 00:04:06.942 END TEST devices 00:04:06.942 ************************************ 00:04:06.942 22:15:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.942 00:04:06.942 real 0m26.852s 00:04:06.942 user 0m8.557s 00:04:06.942 sys 0m13.107s 00:04:06.942 22:15:20 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.942 ************************************ 00:04:06.942 END TEST setup.sh 00:04:06.942 22:15:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.942 ************************************ 00:04:07.200 22:15:20 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.200 22:15:20 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.136 Hugepages 00:04:08.136 node hugesize free / total 00:04:08.136 node0 1048576kB 0 / 0 00:04:08.136 node0 2048kB 2048 / 2048 00:04:08.136 00:04:08.136 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.136 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.136 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:08.395 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:08.395 22:15:21 -- spdk/autotest.sh@130 -- # uname -s 00:04:08.395 22:15:21 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:08.395 22:15:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:08.395 22:15:21 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.961 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.219 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.219 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.219 22:15:22 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:10.620 22:15:23 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:10.620 22:15:23 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:10.620 22:15:23 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.620 22:15:23 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:10.620 22:15:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:10.620 22:15:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:10.620 22:15:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.620 22:15:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.620 22:15:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:10.620 22:15:23 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:10.620 22:15:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.620 22:15:23 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.878 Waiting for block devices as requested 00:04:11.137 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.137 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.137 22:15:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:11.137 22:15:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:11.137 22:15:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:11.137 22:15:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:11.137 22:15:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:11.137 22:15:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:11.137 22:15:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:11.137 22:15:24 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:11.137 22:15:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:11.137 22:15:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:11.137 22:15:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:11.137 22:15:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:11.137 22:15:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:11.137 22:15:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:11.137 22:15:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:11.137 22:15:24 -- common/autotest_common.sh@1557 -- # continue 
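
The block of checks above is the nvme_namespace_revert pre-cleanup step, and the same sequence repeats for the second controller (0000:00:11.0) immediately after this note. The pattern is: enumerate NVMe BDFs via gen_nvme.sh, map each BDF to its /dev/nvmeX node through sysfs, then parse `nvme id-ctrl` for the OACS namespace-management bit and the unallocated capacity. A rough reconstruction from the xtrace follows; the real helpers in common/autotest_common.sh (get_nvme_bdfs, get_nvme_ctrlr_from_bdf) carry more error handling than shown here:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # map PCI address -> controller node, e.g. 0000:00:10.0 -> /dev/nvme1
        ctrlr_sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        nvme_ctrlr=/dev/$(basename "$ctrlr_sysfs")

        oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
        oacs_ns_manage=$(( oacs & 0x8 ))       # namespace management supported?
        (( oacs_ns_manage != 0 )) || continue

        unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)
        # nothing to revert when there is no unallocated capacity
        (( unvmcap == 0 )) && continue
    done

In this run both QEMU controllers report oacs=0x12a (namespace management present) but unvmcap=0, so each iteration hits the `continue` and no namespace is actually reverted.
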
00:04:11.137 22:15:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:11.137 22:15:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.395 22:15:24 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:11.395 22:15:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:11.395 22:15:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:11.395 22:15:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:11.395 22:15:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:11.395 22:15:24 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:11.395 22:15:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:11.395 22:15:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:11.395 22:15:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:11.395 22:15:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:11.395 22:15:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:11.395 22:15:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:11.395 22:15:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:11.395 22:15:24 -- common/autotest_common.sh@1557 -- # continue 00:04:11.395 22:15:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:11.395 22:15:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.395 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:04:11.395 22:15:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:11.395 22:15:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.395 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:04:11.395 22:15:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.328 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.328 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.328 22:15:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:12.328 22:15:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:12.328 22:15:25 -- common/autotest_common.sh@10 -- # set +x 00:04:12.586 22:15:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:12.586 22:15:25 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:12.586 22:15:25 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.586 22:15:25 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:12.586 22:15:25 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:12.586 22:15:25 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:12.586 22:15:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.586 22:15:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.586 22:15:25 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.586 22:15:25 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:12.586 22:15:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.586 22:15:26 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:12.586 22:15:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:12.586 22:15:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:12.586 22:15:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:12.586 22:15:26 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:12.586 22:15:26 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.586 22:15:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:12.586 22:15:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:12.586 22:15:26 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:12.586 22:15:26 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.586 22:15:26 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:12.586 22:15:26 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:12.586 22:15:26 -- common/autotest_common.sh@1593 -- # return 0 00:04:12.586 22:15:26 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:12.586 22:15:26 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:12.586 22:15:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:12.586 22:15:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:12.586 22:15:26 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:12.586 22:15:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.586 22:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.586 22:15:26 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:12.586 22:15:26 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:12.586 22:15:26 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:12.586 22:15:26 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.586 22:15:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.586 22:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.586 22:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.586 ************************************ 00:04:12.586 START TEST env 00:04:12.586 ************************************ 00:04:12.586 22:15:26 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.844 * Looking for test storage... 
00:04:12.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:12.844 22:15:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.844 22:15:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.844 22:15:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.844 22:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.844 ************************************ 00:04:12.844 START TEST env_memory 00:04:12.844 ************************************ 00:04:12.844 22:15:26 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.844 00:04:12.844 00:04:12.844 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.844 http://cunit.sourceforge.net/ 00:04:12.844 00:04:12.844 00:04:12.844 Suite: memory 00:04:12.844 Test: alloc and free memory map ...[2024-07-15 22:15:26.315121] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.844 passed 00:04:12.844 Test: mem map translation ...[2024-07-15 22:15:26.336065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.844 [2024-07-15 22:15:26.336120] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.844 [2024-07-15 22:15:26.336160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.844 [2024-07-15 22:15:26.336170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.844 passed 00:04:12.844 Test: mem map registration ...[2024-07-15 22:15:26.374658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:12.844 [2024-07-15 22:15:26.374724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:12.844 passed 00:04:12.844 Test: mem map adjacent registrations ...passed 00:04:12.844 00:04:12.844 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.844 suites 1 1 n/a 0 0 00:04:12.844 tests 4 4 4 0 0 00:04:12.844 asserts 152 152 152 0 n/a 00:04:12.844 00:04:12.844 Elapsed time = 0.142 seconds 00:04:12.844 00:04:12.844 real 0m0.164s 00:04:12.844 user 0m0.140s 00:04:12.844 sys 0m0.019s 00:04:12.844 22:15:26 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.844 22:15:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:12.844 ************************************ 00:04:12.844 END TEST env_memory 00:04:12.844 ************************************ 00:04:13.102 22:15:26 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.102 22:15:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.102 22:15:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.102 22:15:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.102 22:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.102 ************************************ 00:04:13.102 START TEST env_vtophys 
00:04:13.102 ************************************ 00:04:13.102 22:15:26 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.102 EAL: lib.eal log level changed from notice to debug 00:04:13.102 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 1 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 2 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 3 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 4 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 5 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 6 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 7 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 8 as core 0 on socket 0 00:04:13.102 EAL: Detected lcore 9 as core 0 on socket 0 00:04:13.102 EAL: Maximum logical cores by configuration: 128 00:04:13.102 EAL: Detected CPU lcores: 10 00:04:13.102 EAL: Detected NUMA nodes: 1 00:04:13.102 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:13.102 EAL: Detected shared linkage of DPDK 00:04:13.102 EAL: No shared files mode enabled, IPC will be disabled 00:04:13.102 EAL: Selected IOVA mode 'PA' 00:04:13.102 EAL: Probing VFIO support... 00:04:13.102 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.102 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:13.102 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.102 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.102 EAL: Setting up physically contiguous memory... 00:04:13.102 EAL: Setting maximum number of open files to 524288 00:04:13.102 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.102 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.102 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.102 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.102 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.102 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.102 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.102 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.102 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.102 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.102 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.102 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.102 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.102 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.102 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.102 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.102 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.102 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.102 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.102 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.102 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.102 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.102 EAL: Hugepages will be freed exactly as allocated. 
00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: TSC frequency is ~2490000 KHz 00:04:13.102 EAL: Main lcore 0 is ready (tid=7f0ba3c9ba00;cpuset=[0]) 00:04:13.102 EAL: Trying to obtain current memory policy. 00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 0 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.102 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.102 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.102 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.102 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:13.102 00:04:13.102 00:04:13.102 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.102 http://cunit.sourceforge.net/ 00:04:13.102 00:04:13.102 00:04:13.102 Suite: components_suite 00:04:13.102 Test: vtophys_malloc_test ...passed 00:04:13.102 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 4 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.102 EAL: Trying to obtain current memory policy. 00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 4 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.102 EAL: Trying to obtain current memory policy. 00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 4 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.102 EAL: Trying to obtain current memory policy. 
00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 4 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.102 EAL: Trying to obtain current memory policy. 00:04:13.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.102 EAL: Restoring previous memory policy: 4 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.102 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.102 EAL: request: mp_malloc_sync 00:04:13.102 EAL: No shared files mode enabled, IPC is disabled 00:04:13.103 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.103 EAL: Trying to obtain current memory policy. 00:04:13.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.103 EAL: Restoring previous memory policy: 4 00:04:13.103 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.103 EAL: request: mp_malloc_sync 00:04:13.103 EAL: No shared files mode enabled, IPC is disabled 00:04:13.103 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.103 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.103 EAL: request: mp_malloc_sync 00:04:13.103 EAL: No shared files mode enabled, IPC is disabled 00:04:13.103 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.103 EAL: Trying to obtain current memory policy. 00:04:13.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.361 EAL: Restoring previous memory policy: 4 00:04:13.361 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.361 EAL: request: mp_malloc_sync 00:04:13.361 EAL: No shared files mode enabled, IPC is disabled 00:04:13.361 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.361 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.361 EAL: request: mp_malloc_sync 00:04:13.361 EAL: No shared files mode enabled, IPC is disabled 00:04:13.361 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.361 EAL: Trying to obtain current memory policy. 00:04:13.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.361 EAL: Restoring previous memory policy: 4 00:04:13.361 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.361 EAL: request: mp_malloc_sync 00:04:13.361 EAL: No shared files mode enabled, IPC is disabled 00:04:13.361 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.361 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.361 EAL: request: mp_malloc_sync 00:04:13.361 EAL: No shared files mode enabled, IPC is disabled 00:04:13.361 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.361 EAL: Trying to obtain current memory policy. 
00:04:13.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.620 EAL: Restoring previous memory policy: 4 00:04:13.620 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.620 EAL: request: mp_malloc_sync 00:04:13.620 EAL: No shared files mode enabled, IPC is disabled 00:04:13.620 EAL: Heap on socket 0 was expanded by 514MB 00:04:13.620 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.620 EAL: request: mp_malloc_sync 00:04:13.620 EAL: No shared files mode enabled, IPC is disabled 00:04:13.620 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.620 EAL: Trying to obtain current memory policy. 00:04:13.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.878 EAL: Restoring previous memory policy: 4 00:04:13.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.878 EAL: request: mp_malloc_sync 00:04:13.878 EAL: No shared files mode enabled, IPC is disabled 00:04:13.878 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.136 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.136 passed 00:04:14.136 00:04:14.136 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.136 suites 1 1 n/a 0 0 00:04:14.136 tests 2 2 2 0 0 00:04:14.136 asserts 5246 5246 5246 0 n/a 00:04:14.136 00:04:14.136 Elapsed time = 1.037 seconds 00:04:14.136 EAL: request: mp_malloc_sync 00:04:14.136 EAL: No shared files mode enabled, IPC is disabled 00:04:14.136 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.136 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.136 EAL: request: mp_malloc_sync 00:04:14.136 EAL: No shared files mode enabled, IPC is disabled 00:04:14.136 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.136 EAL: No shared files mode enabled, IPC is disabled 00:04:14.136 EAL: No shared files mode enabled, IPC is disabled 00:04:14.136 EAL: No shared files mode enabled, IPC is disabled 00:04:14.136 00:04:14.136 real 0m1.235s 00:04:14.136 user 0m0.656s 00:04:14.136 sys 0m0.452s 00:04:14.136 22:15:27 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.136 22:15:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.136 ************************************ 00:04:14.136 END TEST env_vtophys 00:04:14.136 ************************************ 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.394 22:15:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.394 22:15:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.394 ************************************ 00:04:14.394 START TEST env_pci 00:04:14.394 ************************************ 00:04:14.394 22:15:27 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.394 00:04:14.394 00:04:14.394 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.394 http://cunit.sourceforge.net/ 00:04:14.394 00:04:14.394 00:04:14.394 Suite: pci 00:04:14.394 Test: pci_hook ...[2024-07-15 22:15:27.829244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58687 has claimed it 00:04:14.394 passed 00:04:14.394 00:04:14.394 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.394 suites 1 1 n/a 0 0 00:04:14.394 tests 1 1 1 0 0 00:04:14.394 asserts 25 25 25 0 n/a 00:04:14.394 
00:04:14.394 Elapsed time = 0.003 seconds 00:04:14.394 EAL: Cannot find device (10000:00:01.0) 00:04:14.394 EAL: Failed to attach device on primary process 00:04:14.394 00:04:14.394 real 0m0.030s 00:04:14.394 user 0m0.016s 00:04:14.394 sys 0m0.013s 00:04:14.394 22:15:27 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.394 22:15:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.394 ************************************ 00:04:14.394 END TEST env_pci 00:04:14.394 ************************************ 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.394 22:15:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.394 22:15:27 env -- env/env.sh@15 -- # uname 00:04:14.394 22:15:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.394 22:15:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.394 22:15:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:14.394 22:15:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.394 22:15:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.394 ************************************ 00:04:14.394 START TEST env_dpdk_post_init 00:04:14.394 ************************************ 00:04:14.394 22:15:27 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.394 EAL: Detected CPU lcores: 10 00:04:14.394 EAL: Detected NUMA nodes: 1 00:04:14.394 EAL: Detected shared linkage of DPDK 00:04:14.394 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.394 EAL: Selected IOVA mode 'PA' 00:04:14.652 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.652 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:14.652 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:14.652 Starting DPDK initialization... 00:04:14.652 Starting SPDK post initialization... 00:04:14.652 SPDK NVMe probe 00:04:14.652 Attaching to 0000:00:10.0 00:04:14.652 Attaching to 0000:00:11.0 00:04:14.652 Attached to 0000:00:10.0 00:04:14.652 Attached to 0000:00:11.0 00:04:14.652 Cleaning up... 
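
The env_dpdk_post_init invocation above is driven by a small piece of glue in test/env/env.sh: a single-core mask is always passed, and on Linux a fixed base virtual address is appended, presumably so the DPDK memory maps land at a predictable location across the sub-tests. Roughly, as reconstructed from the echoed env.sh lines (run_test itself lives in autotest_common.sh and is not shown in this trace):

    testdir=/home/vagrant/spdk_repo/spdk/test/env

    argv='-c 0x1 '                                # run on a single core
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000      # stable VA layout for DPDK
    fi
    run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv

Leaving $argv unquoted is intentional here: word splitting turns it into the two separate arguments that appear verbatim in the logged command line.
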
00:04:14.652 00:04:14.652 real 0m0.194s 00:04:14.652 user 0m0.055s 00:04:14.652 sys 0m0.040s 00:04:14.652 22:15:28 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.652 22:15:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.652 ************************************ 00:04:14.652 END TEST env_dpdk_post_init 00:04:14.652 ************************************ 00:04:14.652 22:15:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.652 22:15:28 env -- env/env.sh@26 -- # uname 00:04:14.652 22:15:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.652 22:15:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.652 22:15:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.652 22:15:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.652 22:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.652 ************************************ 00:04:14.652 START TEST env_mem_callbacks 00:04:14.652 ************************************ 00:04:14.652 22:15:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.652 EAL: Detected CPU lcores: 10 00:04:14.652 EAL: Detected NUMA nodes: 1 00:04:14.652 EAL: Detected shared linkage of DPDK 00:04:14.652 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.652 EAL: Selected IOVA mode 'PA' 00:04:14.935 00:04:14.935 00:04:14.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.935 http://cunit.sourceforge.net/ 00:04:14.935 00:04:14.935 00:04:14.935 Suite: memory 00:04:14.935 Test: test ... 00:04:14.935 register 0x200000200000 2097152 00:04:14.935 malloc 3145728 00:04:14.935 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.935 register 0x200000400000 4194304 00:04:14.935 buf 0x200000500000 len 3145728 PASSED 00:04:14.935 malloc 64 00:04:14.935 buf 0x2000004fff40 len 64 PASSED 00:04:14.935 malloc 4194304 00:04:14.935 register 0x200000800000 6291456 00:04:14.935 buf 0x200000a00000 len 4194304 PASSED 00:04:14.935 free 0x200000500000 3145728 00:04:14.935 free 0x2000004fff40 64 00:04:14.935 unregister 0x200000400000 4194304 PASSED 00:04:14.935 free 0x200000a00000 4194304 00:04:14.935 unregister 0x200000800000 6291456 PASSED 00:04:14.935 malloc 8388608 00:04:14.935 register 0x200000400000 10485760 00:04:14.935 buf 0x200000600000 len 8388608 PASSED 00:04:14.935 free 0x200000600000 8388608 00:04:14.935 unregister 0x200000400000 10485760 PASSED 00:04:14.935 passed 00:04:14.935 00:04:14.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.935 suites 1 1 n/a 0 0 00:04:14.935 tests 1 1 1 0 0 00:04:14.935 asserts 15 15 15 0 n/a 00:04:14.935 00:04:14.935 Elapsed time = 0.007 seconds 00:04:14.935 00:04:14.935 real 0m0.150s 00:04:14.935 user 0m0.020s 00:04:14.935 sys 0m0.029s 00:04:14.935 22:15:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.935 22:15:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.935 ************************************ 00:04:14.935 END TEST env_mem_callbacks 00:04:14.935 ************************************ 00:04:14.935 22:15:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.935 00:04:14.935 real 0m2.284s 00:04:14.935 user 0m1.060s 00:04:14.935 sys 0m0.896s 00:04:14.935 22:15:28 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.935 
22:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.935 ************************************ 00:04:14.935 END TEST env 00:04:14.935 ************************************ 00:04:14.935 22:15:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.935 22:15:28 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.935 22:15:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.935 22:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.935 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.935 ************************************ 00:04:14.935 START TEST rpc 00:04:14.935 ************************************ 00:04:14.935 22:15:28 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.193 * Looking for test storage... 00:04:15.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.193 22:15:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58792 00:04:15.193 22:15:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.193 22:15:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.193 22:15:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58792 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@829 -- # '[' -z 58792 ']' 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.193 22:15:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.193 [2024-07-15 22:15:28.667222] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:15.193 [2024-07-15 22:15:28.667299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58792 ] 00:04:15.193 [2024-07-15 22:15:28.809796] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.452 [2024-07-15 22:15:28.905841] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.452 [2024-07-15 22:15:28.905895] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58792' to capture a snapshot of events at runtime. 00:04:15.452 [2024-07-15 22:15:28.905904] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.452 [2024-07-15 22:15:28.905913] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.452 [2024-07-15 22:15:28.905920] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58792 for offline analysis/debug. 
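The trace notices above describe how the 'bdev' tracepoint group (enabled with -e bdev on the spdk_tgt command line) could be inspected while the target is alive. A minimal sketch, reusing the PID printed in this run and assuming the usual build/bin layout of the repo; the /tmp destination is arbitrary:

  # capture a snapshot of bdev tracepoints from the running target, as the NOTICE suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 58792
  # or keep the shared-memory trace file for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid58792 /tmp/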
00:04:15.452 [2024-07-15 22:15:28.905982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.452 [2024-07-15 22:15:28.947606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:16.019 22:15:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.019 22:15:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:16.019 22:15:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.019 22:15:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.019 22:15:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.019 22:15:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.019 22:15:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.019 22:15:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.019 22:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.019 ************************************ 00:04:16.019 START TEST rpc_integrity 00:04:16.019 ************************************ 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.019 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.019 { 00:04:16.019 "name": "Malloc0", 00:04:16.019 "aliases": [ 00:04:16.019 "ab67def5-3690-4f0b-8cdf-a0664d7777d0" 00:04:16.019 ], 00:04:16.019 "product_name": "Malloc disk", 00:04:16.019 "block_size": 512, 00:04:16.019 "num_blocks": 16384, 00:04:16.019 "uuid": "ab67def5-3690-4f0b-8cdf-a0664d7777d0", 00:04:16.019 "assigned_rate_limits": { 00:04:16.019 "rw_ios_per_sec": 0, 00:04:16.019 "rw_mbytes_per_sec": 0, 00:04:16.019 "r_mbytes_per_sec": 0, 00:04:16.019 "w_mbytes_per_sec": 0 00:04:16.019 }, 00:04:16.019 "claimed": false, 00:04:16.019 "zoned": false, 00:04:16.019 
"supported_io_types": { 00:04:16.019 "read": true, 00:04:16.019 "write": true, 00:04:16.019 "unmap": true, 00:04:16.019 "flush": true, 00:04:16.019 "reset": true, 00:04:16.019 "nvme_admin": false, 00:04:16.019 "nvme_io": false, 00:04:16.019 "nvme_io_md": false, 00:04:16.019 "write_zeroes": true, 00:04:16.019 "zcopy": true, 00:04:16.019 "get_zone_info": false, 00:04:16.019 "zone_management": false, 00:04:16.019 "zone_append": false, 00:04:16.019 "compare": false, 00:04:16.019 "compare_and_write": false, 00:04:16.019 "abort": true, 00:04:16.019 "seek_hole": false, 00:04:16.019 "seek_data": false, 00:04:16.019 "copy": true, 00:04:16.019 "nvme_iov_md": false 00:04:16.019 }, 00:04:16.019 "memory_domains": [ 00:04:16.019 { 00:04:16.019 "dma_device_id": "system", 00:04:16.019 "dma_device_type": 1 00:04:16.019 }, 00:04:16.019 { 00:04:16.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.019 "dma_device_type": 2 00:04:16.019 } 00:04:16.019 ], 00:04:16.019 "driver_specific": {} 00:04:16.019 } 00:04:16.019 ]' 00:04:16.019 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 [2024-07-15 22:15:29.689726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.278 [2024-07-15 22:15:29.689776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.278 [2024-07-15 22:15:29.689792] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24454d0 00:04:16.278 [2024-07-15 22:15:29.689801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.278 [2024-07-15 22:15:29.691215] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.278 [2024-07-15 22:15:29.691254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.278 Passthru0 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.278 { 00:04:16.278 "name": "Malloc0", 00:04:16.278 "aliases": [ 00:04:16.278 "ab67def5-3690-4f0b-8cdf-a0664d7777d0" 00:04:16.278 ], 00:04:16.278 "product_name": "Malloc disk", 00:04:16.278 "block_size": 512, 00:04:16.278 "num_blocks": 16384, 00:04:16.278 "uuid": "ab67def5-3690-4f0b-8cdf-a0664d7777d0", 00:04:16.278 "assigned_rate_limits": { 00:04:16.278 "rw_ios_per_sec": 0, 00:04:16.278 "rw_mbytes_per_sec": 0, 00:04:16.278 "r_mbytes_per_sec": 0, 00:04:16.278 "w_mbytes_per_sec": 0 00:04:16.278 }, 00:04:16.278 "claimed": true, 00:04:16.278 "claim_type": "exclusive_write", 00:04:16.278 "zoned": false, 00:04:16.278 "supported_io_types": { 00:04:16.278 "read": true, 00:04:16.278 "write": true, 00:04:16.278 "unmap": true, 00:04:16.278 "flush": true, 00:04:16.278 "reset": true, 00:04:16.278 "nvme_admin": false, 
00:04:16.278 "nvme_io": false, 00:04:16.278 "nvme_io_md": false, 00:04:16.278 "write_zeroes": true, 00:04:16.278 "zcopy": true, 00:04:16.278 "get_zone_info": false, 00:04:16.278 "zone_management": false, 00:04:16.278 "zone_append": false, 00:04:16.278 "compare": false, 00:04:16.278 "compare_and_write": false, 00:04:16.278 "abort": true, 00:04:16.278 "seek_hole": false, 00:04:16.278 "seek_data": false, 00:04:16.278 "copy": true, 00:04:16.278 "nvme_iov_md": false 00:04:16.278 }, 00:04:16.278 "memory_domains": [ 00:04:16.278 { 00:04:16.278 "dma_device_id": "system", 00:04:16.278 "dma_device_type": 1 00:04:16.278 }, 00:04:16.278 { 00:04:16.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.278 "dma_device_type": 2 00:04:16.278 } 00:04:16.278 ], 00:04:16.278 "driver_specific": {} 00:04:16.278 }, 00:04:16.278 { 00:04:16.278 "name": "Passthru0", 00:04:16.278 "aliases": [ 00:04:16.278 "f2682a3a-ccef-5c32-bd42-a69602f02b61" 00:04:16.278 ], 00:04:16.278 "product_name": "passthru", 00:04:16.278 "block_size": 512, 00:04:16.278 "num_blocks": 16384, 00:04:16.278 "uuid": "f2682a3a-ccef-5c32-bd42-a69602f02b61", 00:04:16.278 "assigned_rate_limits": { 00:04:16.278 "rw_ios_per_sec": 0, 00:04:16.278 "rw_mbytes_per_sec": 0, 00:04:16.278 "r_mbytes_per_sec": 0, 00:04:16.278 "w_mbytes_per_sec": 0 00:04:16.278 }, 00:04:16.278 "claimed": false, 00:04:16.278 "zoned": false, 00:04:16.278 "supported_io_types": { 00:04:16.278 "read": true, 00:04:16.278 "write": true, 00:04:16.278 "unmap": true, 00:04:16.278 "flush": true, 00:04:16.278 "reset": true, 00:04:16.278 "nvme_admin": false, 00:04:16.278 "nvme_io": false, 00:04:16.278 "nvme_io_md": false, 00:04:16.278 "write_zeroes": true, 00:04:16.278 "zcopy": true, 00:04:16.278 "get_zone_info": false, 00:04:16.278 "zone_management": false, 00:04:16.278 "zone_append": false, 00:04:16.278 "compare": false, 00:04:16.278 "compare_and_write": false, 00:04:16.278 "abort": true, 00:04:16.278 "seek_hole": false, 00:04:16.278 "seek_data": false, 00:04:16.278 "copy": true, 00:04:16.278 "nvme_iov_md": false 00:04:16.278 }, 00:04:16.278 "memory_domains": [ 00:04:16.278 { 00:04:16.278 "dma_device_id": "system", 00:04:16.278 "dma_device_type": 1 00:04:16.278 }, 00:04:16.278 { 00:04:16.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.278 "dma_device_type": 2 00:04:16.278 } 00:04:16.278 ], 00:04:16.278 "driver_specific": { 00:04:16.278 "passthru": { 00:04:16.278 "name": "Passthru0", 00:04:16.278 "base_bdev_name": "Malloc0" 00:04:16.278 } 00:04:16.278 } 00:04:16.278 } 00:04:16.278 ]' 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.278 22:15:29 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.278 22:15:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.278 00:04:16.278 real 0m0.326s 00:04:16.278 user 0m0.197s 00:04:16.278 sys 0m0.063s 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.278 22:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.278 ************************************ 00:04:16.278 END TEST rpc_integrity 00:04:16.278 ************************************ 00:04:16.536 22:15:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.536 22:15:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.536 22:15:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.536 22:15:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.536 22:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.536 ************************************ 00:04:16.536 START TEST rpc_plugins 00:04:16.536 ************************************ 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:16.536 22:15:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.536 22:15:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.536 22:15:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.536 22:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.536 22:15:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.536 { 00:04:16.536 "name": "Malloc1", 00:04:16.536 "aliases": [ 00:04:16.536 "07ff41df-9cc7-4dcb-80eb-fc1edcaf6afc" 00:04:16.536 ], 00:04:16.537 "product_name": "Malloc disk", 00:04:16.537 "block_size": 4096, 00:04:16.537 "num_blocks": 256, 00:04:16.537 "uuid": "07ff41df-9cc7-4dcb-80eb-fc1edcaf6afc", 00:04:16.537 "assigned_rate_limits": { 00:04:16.537 "rw_ios_per_sec": 0, 00:04:16.537 "rw_mbytes_per_sec": 0, 00:04:16.537 "r_mbytes_per_sec": 0, 00:04:16.537 "w_mbytes_per_sec": 0 00:04:16.537 }, 00:04:16.537 "claimed": false, 00:04:16.537 "zoned": false, 00:04:16.537 "supported_io_types": { 00:04:16.537 "read": true, 00:04:16.537 "write": true, 00:04:16.537 "unmap": true, 00:04:16.537 "flush": true, 00:04:16.537 "reset": true, 00:04:16.537 "nvme_admin": false, 00:04:16.537 "nvme_io": false, 00:04:16.537 "nvme_io_md": false, 00:04:16.537 "write_zeroes": true, 00:04:16.537 "zcopy": true, 00:04:16.537 "get_zone_info": false, 00:04:16.537 "zone_management": false, 00:04:16.537 "zone_append": false, 00:04:16.537 "compare": false, 00:04:16.537 "compare_and_write": false, 00:04:16.537 "abort": true, 00:04:16.537 "seek_hole": false, 00:04:16.537 "seek_data": false, 00:04:16.537 "copy": true, 00:04:16.537 
"nvme_iov_md": false 00:04:16.537 }, 00:04:16.537 "memory_domains": [ 00:04:16.537 { 00:04:16.537 "dma_device_id": "system", 00:04:16.537 "dma_device_type": 1 00:04:16.537 }, 00:04:16.537 { 00:04:16.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.537 "dma_device_type": 2 00:04:16.537 } 00:04:16.537 ], 00:04:16.537 "driver_specific": {} 00:04:16.537 } 00:04:16.537 ]' 00:04:16.537 22:15:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.537 22:15:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.537 00:04:16.537 real 0m0.164s 00:04:16.537 user 0m0.094s 00:04:16.537 sys 0m0.033s 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.537 ************************************ 00:04:16.537 END TEST rpc_plugins 00:04:16.537 ************************************ 00:04:16.537 22:15:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.537 22:15:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.537 22:15:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.537 22:15:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.537 22:15:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.537 22:15:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.537 ************************************ 00:04:16.537 START TEST rpc_trace_cmd_test 00:04:16.537 ************************************ 00:04:16.537 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:16.537 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.537 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.537 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.537 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.795 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58792", 00:04:16.795 "tpoint_group_mask": "0x8", 00:04:16.795 "iscsi_conn": { 00:04:16.795 "mask": "0x2", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "scsi": { 00:04:16.795 "mask": "0x4", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "bdev": { 00:04:16.795 "mask": "0x8", 00:04:16.795 "tpoint_mask": "0xffffffffffffffff" 00:04:16.795 }, 00:04:16.795 "nvmf_rdma": { 00:04:16.795 "mask": "0x10", 00:04:16.795 "tpoint_mask": "0x0" 
00:04:16.795 }, 00:04:16.795 "nvmf_tcp": { 00:04:16.795 "mask": "0x20", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "ftl": { 00:04:16.795 "mask": "0x40", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "blobfs": { 00:04:16.795 "mask": "0x80", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "dsa": { 00:04:16.795 "mask": "0x200", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "thread": { 00:04:16.795 "mask": "0x400", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "nvme_pcie": { 00:04:16.795 "mask": "0x800", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "iaa": { 00:04:16.795 "mask": "0x1000", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "nvme_tcp": { 00:04:16.795 "mask": "0x2000", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "bdev_nvme": { 00:04:16.795 "mask": "0x4000", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 }, 00:04:16.795 "sock": { 00:04:16.795 "mask": "0x8000", 00:04:16.795 "tpoint_mask": "0x0" 00:04:16.795 } 00:04:16.795 }' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.795 00:04:16.795 real 0m0.223s 00:04:16.795 user 0m0.173s 00:04:16.795 sys 0m0.042s 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.795 22:15:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.795 ************************************ 00:04:16.795 END TEST rpc_trace_cmd_test 00:04:16.795 ************************************ 00:04:17.054 22:15:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.054 22:15:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.054 22:15:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.054 22:15:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.054 22:15:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.054 22:15:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.054 22:15:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.054 ************************************ 00:04:17.054 START TEST rpc_daemon_integrity 00:04:17.054 ************************************ 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.054 
22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.054 { 00:04:17.054 "name": "Malloc2", 00:04:17.054 "aliases": [ 00:04:17.054 "dc2f90de-6022-481f-b93f-2bd0d57caccd" 00:04:17.054 ], 00:04:17.054 "product_name": "Malloc disk", 00:04:17.054 "block_size": 512, 00:04:17.054 "num_blocks": 16384, 00:04:17.054 "uuid": "dc2f90de-6022-481f-b93f-2bd0d57caccd", 00:04:17.054 "assigned_rate_limits": { 00:04:17.054 "rw_ios_per_sec": 0, 00:04:17.054 "rw_mbytes_per_sec": 0, 00:04:17.054 "r_mbytes_per_sec": 0, 00:04:17.054 "w_mbytes_per_sec": 0 00:04:17.054 }, 00:04:17.054 "claimed": false, 00:04:17.054 "zoned": false, 00:04:17.054 "supported_io_types": { 00:04:17.054 "read": true, 00:04:17.054 "write": true, 00:04:17.054 "unmap": true, 00:04:17.054 "flush": true, 00:04:17.054 "reset": true, 00:04:17.054 "nvme_admin": false, 00:04:17.054 "nvme_io": false, 00:04:17.054 "nvme_io_md": false, 00:04:17.054 "write_zeroes": true, 00:04:17.054 "zcopy": true, 00:04:17.054 "get_zone_info": false, 00:04:17.054 "zone_management": false, 00:04:17.054 "zone_append": false, 00:04:17.054 "compare": false, 00:04:17.054 "compare_and_write": false, 00:04:17.054 "abort": true, 00:04:17.054 "seek_hole": false, 00:04:17.054 "seek_data": false, 00:04:17.054 "copy": true, 00:04:17.054 "nvme_iov_md": false 00:04:17.054 }, 00:04:17.054 "memory_domains": [ 00:04:17.054 { 00:04:17.054 "dma_device_id": "system", 00:04:17.054 "dma_device_type": 1 00:04:17.054 }, 00:04:17.054 { 00:04:17.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.054 "dma_device_type": 2 00:04:17.054 } 00:04:17.054 ], 00:04:17.054 "driver_specific": {} 00:04:17.054 } 00:04:17.054 ]' 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.054 [2024-07-15 22:15:30.604654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.054 [2024-07-15 22:15:30.604708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.054 [2024-07-15 22:15:30.604724] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fd6d0 00:04:17.054 [2024-07-15 22:15:30.604732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.054 [2024-07-15 22:15:30.605935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.054 [2024-07-15 22:15:30.605982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.054 Passthru0 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.054 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.055 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.055 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.055 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.055 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.055 { 00:04:17.055 "name": "Malloc2", 00:04:17.055 "aliases": [ 00:04:17.055 "dc2f90de-6022-481f-b93f-2bd0d57caccd" 00:04:17.055 ], 00:04:17.055 "product_name": "Malloc disk", 00:04:17.055 "block_size": 512, 00:04:17.055 "num_blocks": 16384, 00:04:17.055 "uuid": "dc2f90de-6022-481f-b93f-2bd0d57caccd", 00:04:17.055 "assigned_rate_limits": { 00:04:17.055 "rw_ios_per_sec": 0, 00:04:17.055 "rw_mbytes_per_sec": 0, 00:04:17.055 "r_mbytes_per_sec": 0, 00:04:17.055 "w_mbytes_per_sec": 0 00:04:17.055 }, 00:04:17.055 "claimed": true, 00:04:17.055 "claim_type": "exclusive_write", 00:04:17.055 "zoned": false, 00:04:17.055 "supported_io_types": { 00:04:17.055 "read": true, 00:04:17.055 "write": true, 00:04:17.055 "unmap": true, 00:04:17.055 "flush": true, 00:04:17.055 "reset": true, 00:04:17.055 "nvme_admin": false, 00:04:17.055 "nvme_io": false, 00:04:17.055 "nvme_io_md": false, 00:04:17.055 "write_zeroes": true, 00:04:17.055 "zcopy": true, 00:04:17.055 "get_zone_info": false, 00:04:17.055 "zone_management": false, 00:04:17.055 "zone_append": false, 00:04:17.055 "compare": false, 00:04:17.055 "compare_and_write": false, 00:04:17.055 "abort": true, 00:04:17.055 "seek_hole": false, 00:04:17.055 "seek_data": false, 00:04:17.055 "copy": true, 00:04:17.055 "nvme_iov_md": false 00:04:17.055 }, 00:04:17.055 "memory_domains": [ 00:04:17.055 { 00:04:17.055 "dma_device_id": "system", 00:04:17.055 "dma_device_type": 1 00:04:17.055 }, 00:04:17.055 { 00:04:17.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.055 "dma_device_type": 2 00:04:17.055 } 00:04:17.055 ], 00:04:17.055 "driver_specific": {} 00:04:17.055 }, 00:04:17.055 { 00:04:17.055 "name": "Passthru0", 00:04:17.055 "aliases": [ 00:04:17.055 "5bff4ccf-595b-51ec-9d9b-b214d544922c" 00:04:17.055 ], 00:04:17.055 "product_name": "passthru", 00:04:17.055 "block_size": 512, 00:04:17.055 "num_blocks": 16384, 00:04:17.055 "uuid": "5bff4ccf-595b-51ec-9d9b-b214d544922c", 00:04:17.055 "assigned_rate_limits": { 00:04:17.055 "rw_ios_per_sec": 0, 00:04:17.055 "rw_mbytes_per_sec": 0, 00:04:17.055 "r_mbytes_per_sec": 0, 00:04:17.055 "w_mbytes_per_sec": 0 00:04:17.055 }, 00:04:17.055 "claimed": false, 00:04:17.055 "zoned": false, 00:04:17.055 "supported_io_types": { 00:04:17.055 "read": true, 00:04:17.055 "write": true, 00:04:17.055 "unmap": true, 00:04:17.055 "flush": true, 00:04:17.055 "reset": true, 00:04:17.055 "nvme_admin": false, 00:04:17.055 "nvme_io": false, 00:04:17.055 "nvme_io_md": false, 00:04:17.055 "write_zeroes": true, 00:04:17.055 "zcopy": true, 
00:04:17.055 "get_zone_info": false, 00:04:17.055 "zone_management": false, 00:04:17.055 "zone_append": false, 00:04:17.055 "compare": false, 00:04:17.055 "compare_and_write": false, 00:04:17.055 "abort": true, 00:04:17.055 "seek_hole": false, 00:04:17.055 "seek_data": false, 00:04:17.055 "copy": true, 00:04:17.055 "nvme_iov_md": false 00:04:17.055 }, 00:04:17.055 "memory_domains": [ 00:04:17.055 { 00:04:17.055 "dma_device_id": "system", 00:04:17.055 "dma_device_type": 1 00:04:17.055 }, 00:04:17.055 { 00:04:17.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.055 "dma_device_type": 2 00:04:17.055 } 00:04:17.055 ], 00:04:17.055 "driver_specific": { 00:04:17.055 "passthru": { 00:04:17.055 "name": "Passthru0", 00:04:17.055 "base_bdev_name": "Malloc2" 00:04:17.055 } 00:04:17.055 } 00:04:17.055 } 00:04:17.055 ]' 00:04:17.055 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.314 ************************************ 00:04:17.314 END TEST rpc_daemon_integrity 00:04:17.314 ************************************ 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.314 00:04:17.314 real 0m0.324s 00:04:17.314 user 0m0.190s 00:04:17.314 sys 0m0.067s 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.314 22:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.314 22:15:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.314 22:15:30 rpc -- rpc/rpc.sh@84 -- # killprocess 58792 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 58792 ']' 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@952 -- # kill -0 58792 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@953 -- # uname 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58792 00:04:17.314 killing process with pid 58792 00:04:17.314 22:15:30 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58792' 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@967 -- # kill 58792 00:04:17.314 22:15:30 rpc -- common/autotest_common.sh@972 -- # wait 58792 00:04:17.572 00:04:17.572 real 0m2.711s 00:04:17.572 user 0m3.392s 00:04:17.572 sys 0m0.783s 00:04:17.572 22:15:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.572 ************************************ 00:04:17.572 END TEST rpc 00:04:17.572 ************************************ 00:04:17.572 22:15:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.830 22:15:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.830 22:15:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.830 22:15:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.830 22:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.830 22:15:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.830 ************************************ 00:04:17.830 START TEST skip_rpc 00:04:17.830 ************************************ 00:04:17.830 22:15:31 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.830 * Looking for test storage... 00:04:17.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.830 22:15:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.830 22:15:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.830 22:15:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.830 22:15:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.830 22:15:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.830 22:15:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.830 ************************************ 00:04:17.830 START TEST skip_rpc 00:04:17.830 ************************************ 00:04:17.830 22:15:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:17.830 22:15:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58989 00:04:17.830 22:15:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.830 22:15:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.830 22:15:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.088 [2024-07-15 22:15:31.469123] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:18.088 [2024-07-15 22:15:31.469640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58989 ] 00:04:18.088 [2024-07-15 22:15:31.610639] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.088 [2024-07-15 22:15:31.707391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.346 [2024-07-15 22:15:31.749420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58989 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58989 ']' 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58989 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58989 00:04:23.658 killing process with pid 58989 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58989' 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58989 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58989 00:04:23.658 ************************************ 00:04:23.658 END TEST skip_rpc 00:04:23.658 ************************************ 00:04:23.658 00:04:23.658 real 0m5.370s 00:04:23.658 user 0m5.044s 00:04:23.658 sys 0m0.242s 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.658 22:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.658 22:15:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:23.658 22:15:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:23.658 22:15:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.658 22:15:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.658 22:15:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.658 ************************************ 00:04:23.658 START TEST skip_rpc_with_json 00:04:23.658 ************************************ 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59076 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59076 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59076 ']' 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.658 22:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.658 [2024-07-15 22:15:36.913926] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:23.658 [2024-07-15 22:15:36.914008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:04:23.658 [2024-07-15 22:15:37.058228] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.658 [2024-07-15 22:15:37.157808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.658 [2024-07-15 22:15:37.199979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.222 [2024-07-15 22:15:37.782554] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:24.222 request: 00:04:24.222 { 00:04:24.222 "trtype": "tcp", 00:04:24.222 "method": "nvmf_get_transports", 00:04:24.222 "req_id": 1 00:04:24.222 } 00:04:24.222 Got JSON-RPC error response 00:04:24.222 response: 00:04:24.222 { 00:04:24.222 "code": -19, 00:04:24.222 "message": "No such device" 00:04:24.222 } 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.222 [2024-07-15 22:15:37.794630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.222 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.481 { 00:04:24.481 "subsystems": [ 00:04:24.481 { 00:04:24.481 "subsystem": "keyring", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "iobuf", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "iobuf_set_options", 00:04:24.481 "params": { 00:04:24.481 "small_pool_count": 8192, 00:04:24.481 "large_pool_count": 1024, 00:04:24.481 "small_bufsize": 8192, 00:04:24.481 "large_bufsize": 135168 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "sock", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "sock_set_default_impl", 00:04:24.481 "params": { 00:04:24.481 "impl_name": "uring" 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "sock_impl_set_options", 
00:04:24.481 "params": { 00:04:24.481 "impl_name": "ssl", 00:04:24.481 "recv_buf_size": 4096, 00:04:24.481 "send_buf_size": 4096, 00:04:24.481 "enable_recv_pipe": true, 00:04:24.481 "enable_quickack": false, 00:04:24.481 "enable_placement_id": 0, 00:04:24.481 "enable_zerocopy_send_server": true, 00:04:24.481 "enable_zerocopy_send_client": false, 00:04:24.481 "zerocopy_threshold": 0, 00:04:24.481 "tls_version": 0, 00:04:24.481 "enable_ktls": false 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "sock_impl_set_options", 00:04:24.481 "params": { 00:04:24.481 "impl_name": "posix", 00:04:24.481 "recv_buf_size": 2097152, 00:04:24.481 "send_buf_size": 2097152, 00:04:24.481 "enable_recv_pipe": true, 00:04:24.481 "enable_quickack": false, 00:04:24.481 "enable_placement_id": 0, 00:04:24.481 "enable_zerocopy_send_server": true, 00:04:24.481 "enable_zerocopy_send_client": false, 00:04:24.481 "zerocopy_threshold": 0, 00:04:24.481 "tls_version": 0, 00:04:24.481 "enable_ktls": false 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "sock_impl_set_options", 00:04:24.481 "params": { 00:04:24.481 "impl_name": "uring", 00:04:24.481 "recv_buf_size": 2097152, 00:04:24.481 "send_buf_size": 2097152, 00:04:24.481 "enable_recv_pipe": true, 00:04:24.481 "enable_quickack": false, 00:04:24.481 "enable_placement_id": 0, 00:04:24.481 "enable_zerocopy_send_server": false, 00:04:24.481 "enable_zerocopy_send_client": false, 00:04:24.481 "zerocopy_threshold": 0, 00:04:24.481 "tls_version": 0, 00:04:24.481 "enable_ktls": false 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "vmd", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "accel", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "accel_set_options", 00:04:24.481 "params": { 00:04:24.481 "small_cache_size": 128, 00:04:24.481 "large_cache_size": 16, 00:04:24.481 "task_count": 2048, 00:04:24.481 "sequence_count": 2048, 00:04:24.481 "buf_count": 2048 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "bdev", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "bdev_set_options", 00:04:24.481 "params": { 00:04:24.481 "bdev_io_pool_size": 65535, 00:04:24.481 "bdev_io_cache_size": 256, 00:04:24.481 "bdev_auto_examine": true, 00:04:24.481 "iobuf_small_cache_size": 128, 00:04:24.481 "iobuf_large_cache_size": 16 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "bdev_raid_set_options", 00:04:24.481 "params": { 00:04:24.481 "process_window_size_kb": 1024 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "bdev_iscsi_set_options", 00:04:24.481 "params": { 00:04:24.481 "timeout_sec": 30 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "bdev_nvme_set_options", 00:04:24.481 "params": { 00:04:24.481 "action_on_timeout": "none", 00:04:24.481 "timeout_us": 0, 00:04:24.481 "timeout_admin_us": 0, 00:04:24.481 "keep_alive_timeout_ms": 10000, 00:04:24.481 "arbitration_burst": 0, 00:04:24.481 "low_priority_weight": 0, 00:04:24.481 "medium_priority_weight": 0, 00:04:24.481 "high_priority_weight": 0, 00:04:24.481 "nvme_adminq_poll_period_us": 10000, 00:04:24.481 "nvme_ioq_poll_period_us": 0, 00:04:24.481 "io_queue_requests": 0, 00:04:24.481 "delay_cmd_submit": true, 00:04:24.481 "transport_retry_count": 4, 00:04:24.481 "bdev_retry_count": 3, 00:04:24.481 "transport_ack_timeout": 0, 00:04:24.481 "ctrlr_loss_timeout_sec": 0, 00:04:24.481 
"reconnect_delay_sec": 0, 00:04:24.481 "fast_io_fail_timeout_sec": 0, 00:04:24.481 "disable_auto_failback": false, 00:04:24.481 "generate_uuids": false, 00:04:24.481 "transport_tos": 0, 00:04:24.481 "nvme_error_stat": false, 00:04:24.481 "rdma_srq_size": 0, 00:04:24.481 "io_path_stat": false, 00:04:24.481 "allow_accel_sequence": false, 00:04:24.481 "rdma_max_cq_size": 0, 00:04:24.481 "rdma_cm_event_timeout_ms": 0, 00:04:24.481 "dhchap_digests": [ 00:04:24.481 "sha256", 00:04:24.481 "sha384", 00:04:24.481 "sha512" 00:04:24.481 ], 00:04:24.481 "dhchap_dhgroups": [ 00:04:24.481 "null", 00:04:24.481 "ffdhe2048", 00:04:24.481 "ffdhe3072", 00:04:24.481 "ffdhe4096", 00:04:24.481 "ffdhe6144", 00:04:24.481 "ffdhe8192" 00:04:24.481 ] 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "bdev_nvme_set_hotplug", 00:04:24.481 "params": { 00:04:24.481 "period_us": 100000, 00:04:24.481 "enable": false 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "bdev_wait_for_examine" 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "scsi", 00:04:24.481 "config": null 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "scheduler", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "framework_set_scheduler", 00:04:24.481 "params": { 00:04:24.481 "name": "static" 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "vhost_scsi", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "vhost_blk", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "ublk", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "nbd", 00:04:24.481 "config": [] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "nvmf", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "nvmf_set_config", 00:04:24.481 "params": { 00:04:24.481 "discovery_filter": "match_any", 00:04:24.481 "admin_cmd_passthru": { 00:04:24.481 "identify_ctrlr": false 00:04:24.481 } 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "nvmf_set_max_subsystems", 00:04:24.481 "params": { 00:04:24.481 "max_subsystems": 1024 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "nvmf_set_crdt", 00:04:24.481 "params": { 00:04:24.481 "crdt1": 0, 00:04:24.481 "crdt2": 0, 00:04:24.481 "crdt3": 0 00:04:24.481 } 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "method": "nvmf_create_transport", 00:04:24.481 "params": { 00:04:24.481 "trtype": "TCP", 00:04:24.481 "max_queue_depth": 128, 00:04:24.481 "max_io_qpairs_per_ctrlr": 127, 00:04:24.481 "in_capsule_data_size": 4096, 00:04:24.481 "max_io_size": 131072, 00:04:24.481 "io_unit_size": 131072, 00:04:24.481 "max_aq_depth": 128, 00:04:24.481 "num_shared_buffers": 511, 00:04:24.481 "buf_cache_size": 4294967295, 00:04:24.481 "dif_insert_or_strip": false, 00:04:24.481 "zcopy": false, 00:04:24.481 "c2h_success": true, 00:04:24.481 "sock_priority": 0, 00:04:24.481 "abort_timeout_sec": 1, 00:04:24.481 "ack_timeout": 0, 00:04:24.481 "data_wr_pool_size": 0 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 }, 00:04:24.481 { 00:04:24.481 "subsystem": "iscsi", 00:04:24.481 "config": [ 00:04:24.481 { 00:04:24.481 "method": "iscsi_set_options", 00:04:24.481 "params": { 00:04:24.481 "node_base": "iqn.2016-06.io.spdk", 00:04:24.481 "max_sessions": 128, 00:04:24.481 "max_connections_per_session": 2, 00:04:24.481 "max_queue_depth": 64, 00:04:24.481 "default_time2wait": 2, 
00:04:24.481 "default_time2retain": 20, 00:04:24.481 "first_burst_length": 8192, 00:04:24.481 "immediate_data": true, 00:04:24.481 "allow_duplicated_isid": false, 00:04:24.481 "error_recovery_level": 0, 00:04:24.481 "nop_timeout": 60, 00:04:24.481 "nop_in_interval": 30, 00:04:24.481 "disable_chap": false, 00:04:24.481 "require_chap": false, 00:04:24.481 "mutual_chap": false, 00:04:24.481 "chap_group": 0, 00:04:24.481 "max_large_datain_per_connection": 64, 00:04:24.481 "max_r2t_per_connection": 4, 00:04:24.481 "pdu_pool_size": 36864, 00:04:24.481 "immediate_data_pool_size": 16384, 00:04:24.481 "data_out_pool_size": 2048 00:04:24.481 } 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 } 00:04:24.481 ] 00:04:24.481 } 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59076 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59076 ']' 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59076 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.481 22:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59076 00:04:24.481 killing process with pid 59076 00:04:24.481 22:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.481 22:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.481 22:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59076' 00:04:24.481 22:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59076 00:04:24.481 22:15:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59076 00:04:24.740 22:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59103 00:04:24.740 22:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.740 22:15:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59103 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59103 ']' 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59103 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59103 00:04:30.004 killing process with pid 59103 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59103' 00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59103 
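The sleep above gives the second spdk_tgt instance time to come up from the JSON it was handed instead of from live RPCs. A minimal sketch of the same save/restore round-trip, assuming a target is already running and listening on the default /var/tmp/spdk.sock; paths mirror the ones used by this test:

  # dump the live subsystem configuration of the running target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # start a fresh target with the RPC server disabled, restoring everything from the saved JSON
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json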
00:04:30.004 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59103 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.262 ************************************ 00:04:30.262 END TEST skip_rpc_with_json 00:04:30.262 ************************************ 00:04:30.262 00:04:30.262 real 0m6.857s 00:04:30.262 user 0m6.575s 00:04:30.262 sys 0m0.596s 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.262 22:15:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:30.262 22:15:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:30.262 22:15:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.262 22:15:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.262 22:15:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.262 ************************************ 00:04:30.262 START TEST skip_rpc_with_delay 00:04:30.262 ************************************ 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.262 [2024-07-15 22:15:43.851195] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:30.262 [2024-07-15 22:15:43.851313] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:30.262 00:04:30.262 real 0m0.078s 00:04:30.262 user 0m0.040s 00:04:30.262 sys 0m0.038s 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.262 ************************************ 00:04:30.262 END TEST skip_rpc_with_delay 00:04:30.262 ************************************ 00:04:30.262 22:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:30.520 22:15:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:30.520 22:15:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:30.520 22:15:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:30.520 22:15:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:30.520 22:15:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.520 22:15:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.520 22:15:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.520 ************************************ 00:04:30.520 START TEST exit_on_failed_rpc_init 00:04:30.520 ************************************ 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59207 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59207 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59207 ']' 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.520 22:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.520 [2024-07-15 22:15:44.004230] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:30.520 [2024-07-15 22:15:44.004305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:04:30.520 [2024-07-15 22:15:44.147362] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.778 [2024-07-15 22:15:44.244546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.778 [2024-07-15 22:15:44.286282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.345 22:15:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.345 [2024-07-15 22:15:44.910407] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:31.345 [2024-07-15 22:15:44.910830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:04:31.604 [2024-07-15 22:15:45.053634] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.604 [2024-07-15 22:15:45.154009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.604 [2024-07-15 22:15:45.154264] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:31.604 [2024-07-15 22:15:45.154429] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:31.604 [2024-07-15 22:15:45.154460] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59207 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59207 ']' 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59207 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59207 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59207' 00:04:31.864 killing process with pid 59207 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59207 00:04:31.864 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59207 00:04:32.123 ************************************ 00:04:32.123 END TEST exit_on_failed_rpc_init 00:04:32.123 ************************************ 00:04:32.123 00:04:32.123 real 0m1.662s 00:04:32.123 user 0m1.883s 00:04:32.123 sys 0m0.382s 00:04:32.123 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.123 22:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.123 22:15:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.123 22:15:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.123 00:04:32.123 real 0m14.407s 00:04:32.123 user 0m13.676s 00:04:32.123 sys 0m1.555s 00:04:32.123 ************************************ 00:04:32.123 END TEST skip_rpc 00:04:32.123 ************************************ 00:04:32.123 22:15:45 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.123 22:15:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.123 22:15:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.123 22:15:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.123 22:15:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.123 
22:15:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.123 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:04:32.123 ************************************ 00:04:32.123 START TEST rpc_client 00:04:32.123 ************************************ 00:04:32.123 22:15:45 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.381 * Looking for test storage... 00:04:32.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:32.381 22:15:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:32.381 OK 00:04:32.381 22:15:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.381 00:04:32.381 real 0m0.156s 00:04:32.381 user 0m0.066s 00:04:32.381 sys 0m0.099s 00:04:32.381 22:15:45 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.381 22:15:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.381 ************************************ 00:04:32.381 END TEST rpc_client 00:04:32.381 ************************************ 00:04:32.381 22:15:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.381 22:15:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.381 22:15:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.381 22:15:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.381 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:04:32.381 ************************************ 00:04:32.381 START TEST json_config 00:04:32.381 ************************************ 00:04:32.382 22:15:45 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.641 22:15:46 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.641 22:15:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.641 22:15:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.641 22:15:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.641 22:15:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.641 22:15:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.641 22:15:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.641 22:15:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:32.641 22:15:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@47 -- # : 0 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.641 22:15:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.641 INFO: JSON configuration test init 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:32.641 22:15:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.641 22:15:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:32.641 22:15:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.641 22:15:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.641 22:15:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:32.641 22:15:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:32.641 22:15:46 json_config -- json_config/common.sh@10 -- # shift 00:04:32.641 22:15:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.641 22:15:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.642 22:15:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.642 22:15:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.642 22:15:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.642 22:15:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59349 00:04:32.642 Waiting for target to run... 00:04:32.642 22:15:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:32.642 22:15:46 json_config -- json_config/common.sh@25 -- # waitforlisten 59349 /var/tmp/spdk_tgt.sock 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 59349 ']' 00:04:32.642 22:15:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.642 22:15:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.642 [2024-07-15 22:15:46.183023] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:32.642 [2024-07-15 22:15:46.183096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:04:33.207 [2024-07-15 22:15:46.546536] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.207 [2024-07-15 22:15:46.627512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.464 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:33.464 22:15:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.464 22:15:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:33.464 22:15:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:33.464 22:15:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:33.721 [2024-07-15 22:15:47.285942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:33.990 22:15:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.990 22:15:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.990 22:15:47 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:33.990 22:15:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:33.990 22:15:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:34.247 22:15:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.247 22:15:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:34.247 22:15:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.247 22:15:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:34.247 22:15:47 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.247 22:15:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.504 MallocForNvmf0 00:04:34.504 22:15:47 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.505 22:15:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.761 MallocForNvmf1 00:04:34.761 22:15:48 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:34.761 22:15:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:34.761 [2024-07-15 22:15:48.393919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.018 22:15:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.018 22:15:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.018 22:15:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.018 22:15:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.274 22:15:48 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.274 22:15:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.532 22:15:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.532 22:15:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.792 [2024-07-15 22:15:49.234014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.792 22:15:49 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:35.792 22:15:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.792 22:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.792 22:15:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:35.792 22:15:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.792 22:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.792 22:15:49 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:35.792 22:15:49 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:35.792 22:15:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.051 MallocBdevForConfigChangeCheck 00:04:36.051 22:15:49 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:36.051 22:15:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.051 22:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.051 22:15:49 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:36.051 22:15:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.618 INFO: shutting down applications... 00:04:36.618 22:15:49 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
00:04:36.618 22:15:49 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:36.618 22:15:49 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:36.618 22:15:49 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:36.618 22:15:49 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:36.618 Calling clear_iscsi_subsystem 00:04:36.618 Calling clear_nvmf_subsystem 00:04:36.618 Calling clear_nbd_subsystem 00:04:36.618 Calling clear_ublk_subsystem 00:04:36.618 Calling clear_vhost_blk_subsystem 00:04:36.618 Calling clear_vhost_scsi_subsystem 00:04:36.618 Calling clear_bdev_subsystem 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:36.877 22:15:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:37.136 22:15:50 json_config -- json_config/json_config.sh@345 -- # break 00:04:37.136 22:15:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:37.136 22:15:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:37.136 22:15:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:37.136 22:15:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.136 22:15:50 json_config -- json_config/common.sh@35 -- # [[ -n 59349 ]] 00:04:37.136 22:15:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59349 00:04:37.136 22:15:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.136 22:15:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.136 22:15:50 json_config -- json_config/common.sh@41 -- # kill -0 59349 00:04:37.136 22:15:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.704 22:15:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.704 22:15:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.704 22:15:51 json_config -- json_config/common.sh@41 -- # kill -0 59349 00:04:37.704 22:15:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.704 22:15:51 json_config -- json_config/common.sh@43 -- # break 00:04:37.705 22:15:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.705 SPDK target shutdown done 00:04:37.705 22:15:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.705 INFO: relaunching applications... 00:04:37.705 22:15:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:37.705 22:15:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.705 22:15:51 json_config -- json_config/common.sh@9 -- # local app=target 00:04:37.705 22:15:51 json_config -- json_config/common.sh@10 -- # shift 00:04:37.705 22:15:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.705 22:15:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.705 22:15:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.705 22:15:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.705 22:15:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.705 22:15:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59528 00:04:37.705 Waiting for target to run... 00:04:37.705 22:15:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.705 22:15:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.705 22:15:51 json_config -- json_config/common.sh@25 -- # waitforlisten 59528 /var/tmp/spdk_tgt.sock 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 59528 ']' 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.705 22:15:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.705 [2024-07-15 22:15:51.156929] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:37.705 [2024-07-15 22:15:51.157011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:04:37.964 [2024-07-15 22:15:51.541085] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.223 [2024-07-15 22:15:51.653570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.223 [2024-07-15 22:15:51.778729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:38.483 [2024-07-15 22:15:51.998931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.483 [2024-07-15 22:15:52.030963] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.483 22:15:52 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.483 00:04:38.483 22:15:52 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:38.483 22:15:52 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.483 22:15:52 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:38.483 INFO: Checking if target configuration is the same... 
00:04:38.483 22:15:52 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.483 22:15:52 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.483 22:15:52 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:38.483 22:15:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.483 + '[' 2 -ne 2 ']' 00:04:38.483 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:38.483 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:38.483 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:38.483 +++ basename /dev/fd/62 00:04:38.483 ++ mktemp /tmp/62.XXX 00:04:38.483 + tmp_file_1=/tmp/62.IWn 00:04:38.483 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.483 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.483 + tmp_file_2=/tmp/spdk_tgt_config.json.Idy 00:04:38.483 + ret=0 00:04:38.483 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.051 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.051 + diff -u /tmp/62.IWn /tmp/spdk_tgt_config.json.Idy 00:04:39.051 INFO: JSON config files are the same 00:04:39.051 + echo 'INFO: JSON config files are the same' 00:04:39.051 + rm /tmp/62.IWn /tmp/spdk_tgt_config.json.Idy 00:04:39.051 + exit 0 00:04:39.051 22:15:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:39.051 INFO: changing configuration and checking if this can be detected... 00:04:39.051 22:15:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:39.051 22:15:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.051 22:15:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.310 22:15:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:39.310 22:15:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.310 22:15:52 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:39.310 + '[' 2 -ne 2 ']' 00:04:39.310 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:39.310 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:39.310 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:39.310 +++ basename /dev/fd/62 00:04:39.310 ++ mktemp /tmp/62.XXX 00:04:39.310 + tmp_file_1=/tmp/62.Yst 00:04:39.310 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:39.310 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.310 + tmp_file_2=/tmp/spdk_tgt_config.json.6V8 00:04:39.310 + ret=0 00:04:39.310 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.568 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.568 + diff -u /tmp/62.Yst /tmp/spdk_tgt_config.json.6V8 00:04:39.568 + ret=1 00:04:39.568 + echo '=== Start of file: /tmp/62.Yst ===' 00:04:39.568 + cat /tmp/62.Yst 00:04:39.568 + echo '=== End of file: /tmp/62.Yst ===' 00:04:39.568 + echo '' 00:04:39.568 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6V8 ===' 00:04:39.568 + cat /tmp/spdk_tgt_config.json.6V8 00:04:39.568 + echo '=== End of file: /tmp/spdk_tgt_config.json.6V8 ===' 00:04:39.568 + echo '' 00:04:39.568 + rm /tmp/62.Yst /tmp/spdk_tgt_config.json.6V8 00:04:39.568 + exit 1 00:04:39.568 INFO: configuration change detected. 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 59528 ]] 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.568 22:15:53 json_config -- json_config/json_config.sh@323 -- # killprocess 59528 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@948 -- # '[' -z 59528 ']' 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@952 -- # kill -0 59528 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@953 -- # uname 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.568 22:15:53 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59528 00:04:39.825 
killing process with pid 59528 00:04:39.825 22:15:53 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.825 22:15:53 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.825 22:15:53 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59528' 00:04:39.825 22:15:53 json_config -- common/autotest_common.sh@967 -- # kill 59528 00:04:39.825 22:15:53 json_config -- common/autotest_common.sh@972 -- # wait 59528 00:04:40.081 22:15:53 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.081 22:15:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:40.081 22:15:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.081 22:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.081 INFO: Success 00:04:40.081 22:15:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:40.081 22:15:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:40.081 ************************************ 00:04:40.081 END TEST json_config 00:04:40.081 ************************************ 00:04:40.081 00:04:40.081 real 0m7.679s 00:04:40.081 user 0m10.362s 00:04:40.081 sys 0m1.810s 00:04:40.081 22:15:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.081 22:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.081 22:15:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.081 22:15:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.338 22:15:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.338 22:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.338 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.338 ************************************ 00:04:40.338 START TEST json_config_extra_key 00:04:40.338 ************************************ 00:04:40.338 22:15:53 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.338 22:15:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.338 22:15:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.338 22:15:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.338 22:15:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.338 22:15:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.338 22:15:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.338 22:15:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:40.338 22:15:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.338 22:15:53 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.338 22:15:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.338 INFO: launching applications... 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:40.338 Waiting for target to run... 00:04:40.338 22:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59669 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.338 22:15:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59669 /var/tmp/spdk_tgt.sock 00:04:40.338 22:15:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59669 ']' 00:04:40.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:40.338 22:15:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.338 22:15:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.339 22:15:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:40.339 22:15:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.339 22:15:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.339 22:15:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.339 [2024-07-15 22:15:53.919234] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:40.339 [2024-07-15 22:15:53.919300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:04:40.958 [2024-07-15 22:15:54.276516] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.958 [2024-07-15 22:15:54.378044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.958 [2024-07-15 22:15:54.398165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:41.216 22:15:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.216 00:04:41.216 INFO: shutting down applications... 00:04:41.216 22:15:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:41.216 22:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
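What the trace shows here is the start half of the test: spdk_tgt is launched in the background with "-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json", and waitforlisten polls until the app answers on that RPC socket (max_retries=100 in the trace). A rough sketch of that start-and-wait flow, assuming SPDK's rpc.py is on PATH; the 0.1 s sleep and the use of spdk_get_version as the liveness probe are assumptions, not read from the log:

    # Launch the target in the background and remember its pid (paths as in the trace).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid[target]=$!

    # Poll the RPC socket until the app responds, or give up after max_retries.
    waitforlisten() {
        local pid=$1 rpc_addr=$2 max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before it could listen
            if rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
                return 0                             # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }
    waitforlisten "${app_pid[target]}" /var/tmp/spdk_tgt.sock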
00:04:41.216 22:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59669 ]] 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59669 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59669 00:04:41.216 22:15:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.780 22:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.780 22:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.780 22:15:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59669 00:04:41.780 22:15:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59669 00:04:42.349 SPDK target shutdown done 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.349 22:15:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.349 22:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:42.349 Success 00:04:42.349 00:04:42.349 real 0m2.056s 00:04:42.349 user 0m1.597s 00:04:42.349 sys 0m0.417s 00:04:42.349 22:15:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.349 22:15:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.349 ************************************ 00:04:42.349 END TEST json_config_extra_key 00:04:42.349 ************************************ 00:04:42.349 22:15:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.349 22:15:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:42.349 22:15:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.349 22:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.349 22:15:55 -- common/autotest_common.sh@10 -- # set +x 00:04:42.349 ************************************ 00:04:42.349 START TEST alias_rpc 00:04:42.349 ************************************ 00:04:42.349 22:15:55 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:42.608 * Looking for test storage... 
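The shutdown half of json_config_extra_key, traced just above before alias_rpc begins, is cooperative: send SIGINT to the target, then poll it with kill -0, sleeping 0.5 s between attempts and giving up after 30 tries. A condensed sketch of that loop; the loop bounds and signal are the ones in the trace, the failure message at the end is illustrative:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 only checks for existence; it delivers no signal.
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "process $pid is still alive after SIGINT" >&2   # illustrative message
        return 1
    }

    shutdown_app "${app_pid[target]}"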
00:04:42.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:42.608 22:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:42.608 22:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59740 00:04:42.608 22:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.608 22:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59740 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59740 ']' 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.608 22:15:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.608 [2024-07-15 22:15:56.048193] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:42.608 [2024-07-15 22:15:56.048262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:04:42.608 [2024-07-15 22:15:56.185520] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.868 [2024-07-15 22:15:56.351574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.868 [2024-07-15 22:15:56.429588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:43.434 22:15:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.434 22:15:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.434 22:15:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:43.692 22:15:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59740 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59740 ']' 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59740 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59740 00:04:43.692 killing process with pid 59740 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59740' 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@967 -- # kill 59740 00:04:43.692 22:15:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 59740 00:04:44.260 ************************************ 00:04:44.260 END TEST alias_rpc 00:04:44.260 ************************************ 00:04:44.260 00:04:44.260 real 0m1.991s 00:04:44.260 user 0m1.907s 00:04:44.260 sys 0m0.597s 00:04:44.260 22:15:57 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.260 22:15:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.520 22:15:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.520 22:15:57 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:44.520 22:15:57 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.520 22:15:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.520 22:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.520 22:15:57 -- common/autotest_common.sh@10 -- # set +x 00:04:44.520 ************************************ 00:04:44.520 START TEST spdkcli_tcp 00:04:44.520 ************************************ 00:04:44.520 22:15:57 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.520 * Looking for test storage... 00:04:44.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59816 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.520 22:15:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59816 00:04:44.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59816 ']' 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.520 22:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.520 [2024-07-15 22:15:58.132316] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
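The alias_rpc teardown a little further up runs the killprocess helper: confirm the pid is still alive, look up its command name with ps, refuse to signal anything that looks like sudo, then kill and wait. A simplified sketch of that sequence; the real autotest_common.sh helper treats the sudo case and privileges more carefully than the early return used here:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                        # nothing to do if already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            # Never aim the signal at sudo itself (handling simplified here).
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # wait only applies to our own children, which these test targets are.
        wait "$pid" || true
    }

    killprocess "$spdk_tgt_pid"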
00:04:44.520 [2024-07-15 22:15:58.132994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:04:44.779 [2024-07-15 22:15:58.266371] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.040 [2024-07-15 22:15:58.426363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.040 [2024-07-15 22:15:58.426372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.040 [2024-07-15 22:15:58.511758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:45.608 22:15:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.608 22:15:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:45.608 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59833 00:04:45.608 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.608 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.608 [ 00:04:45.608 "bdev_malloc_delete", 00:04:45.608 "bdev_malloc_create", 00:04:45.608 "bdev_null_resize", 00:04:45.608 "bdev_null_delete", 00:04:45.608 "bdev_null_create", 00:04:45.608 "bdev_nvme_cuse_unregister", 00:04:45.608 "bdev_nvme_cuse_register", 00:04:45.608 "bdev_opal_new_user", 00:04:45.608 "bdev_opal_set_lock_state", 00:04:45.608 "bdev_opal_delete", 00:04:45.608 "bdev_opal_get_info", 00:04:45.609 "bdev_opal_create", 00:04:45.609 "bdev_nvme_opal_revert", 00:04:45.609 "bdev_nvme_opal_init", 00:04:45.609 "bdev_nvme_send_cmd", 00:04:45.609 "bdev_nvme_get_path_iostat", 00:04:45.609 "bdev_nvme_get_mdns_discovery_info", 00:04:45.609 "bdev_nvme_stop_mdns_discovery", 00:04:45.609 "bdev_nvme_start_mdns_discovery", 00:04:45.609 "bdev_nvme_set_multipath_policy", 00:04:45.609 "bdev_nvme_set_preferred_path", 00:04:45.609 "bdev_nvme_get_io_paths", 00:04:45.609 "bdev_nvme_remove_error_injection", 00:04:45.609 "bdev_nvme_add_error_injection", 00:04:45.609 "bdev_nvme_get_discovery_info", 00:04:45.609 "bdev_nvme_stop_discovery", 00:04:45.609 "bdev_nvme_start_discovery", 00:04:45.609 "bdev_nvme_get_controller_health_info", 00:04:45.609 "bdev_nvme_disable_controller", 00:04:45.609 "bdev_nvme_enable_controller", 00:04:45.609 "bdev_nvme_reset_controller", 00:04:45.609 "bdev_nvme_get_transport_statistics", 00:04:45.609 "bdev_nvme_apply_firmware", 00:04:45.609 "bdev_nvme_detach_controller", 00:04:45.609 "bdev_nvme_get_controllers", 00:04:45.609 "bdev_nvme_attach_controller", 00:04:45.609 "bdev_nvme_set_hotplug", 00:04:45.609 "bdev_nvme_set_options", 00:04:45.609 "bdev_passthru_delete", 00:04:45.609 "bdev_passthru_create", 00:04:45.609 "bdev_lvol_set_parent_bdev", 00:04:45.609 "bdev_lvol_set_parent", 00:04:45.609 "bdev_lvol_check_shallow_copy", 00:04:45.609 "bdev_lvol_start_shallow_copy", 00:04:45.609 "bdev_lvol_grow_lvstore", 00:04:45.609 "bdev_lvol_get_lvols", 00:04:45.609 "bdev_lvol_get_lvstores", 00:04:45.609 "bdev_lvol_delete", 00:04:45.609 "bdev_lvol_set_read_only", 00:04:45.609 "bdev_lvol_resize", 00:04:45.609 "bdev_lvol_decouple_parent", 00:04:45.609 "bdev_lvol_inflate", 00:04:45.609 "bdev_lvol_rename", 00:04:45.609 "bdev_lvol_clone_bdev", 00:04:45.609 "bdev_lvol_clone", 00:04:45.609 "bdev_lvol_snapshot", 00:04:45.609 "bdev_lvol_create", 
00:04:45.609 "bdev_lvol_delete_lvstore", 00:04:45.609 "bdev_lvol_rename_lvstore", 00:04:45.609 "bdev_lvol_create_lvstore", 00:04:45.609 "bdev_raid_set_options", 00:04:45.609 "bdev_raid_remove_base_bdev", 00:04:45.609 "bdev_raid_add_base_bdev", 00:04:45.609 "bdev_raid_delete", 00:04:45.609 "bdev_raid_create", 00:04:45.609 "bdev_raid_get_bdevs", 00:04:45.609 "bdev_error_inject_error", 00:04:45.609 "bdev_error_delete", 00:04:45.609 "bdev_error_create", 00:04:45.609 "bdev_split_delete", 00:04:45.609 "bdev_split_create", 00:04:45.609 "bdev_delay_delete", 00:04:45.609 "bdev_delay_create", 00:04:45.609 "bdev_delay_update_latency", 00:04:45.609 "bdev_zone_block_delete", 00:04:45.609 "bdev_zone_block_create", 00:04:45.609 "blobfs_create", 00:04:45.609 "blobfs_detect", 00:04:45.609 "blobfs_set_cache_size", 00:04:45.609 "bdev_aio_delete", 00:04:45.609 "bdev_aio_rescan", 00:04:45.609 "bdev_aio_create", 00:04:45.609 "bdev_ftl_set_property", 00:04:45.609 "bdev_ftl_get_properties", 00:04:45.609 "bdev_ftl_get_stats", 00:04:45.609 "bdev_ftl_unmap", 00:04:45.609 "bdev_ftl_unload", 00:04:45.609 "bdev_ftl_delete", 00:04:45.609 "bdev_ftl_load", 00:04:45.609 "bdev_ftl_create", 00:04:45.609 "bdev_virtio_attach_controller", 00:04:45.609 "bdev_virtio_scsi_get_devices", 00:04:45.609 "bdev_virtio_detach_controller", 00:04:45.609 "bdev_virtio_blk_set_hotplug", 00:04:45.609 "bdev_iscsi_delete", 00:04:45.609 "bdev_iscsi_create", 00:04:45.609 "bdev_iscsi_set_options", 00:04:45.609 "bdev_uring_delete", 00:04:45.609 "bdev_uring_rescan", 00:04:45.609 "bdev_uring_create", 00:04:45.609 "accel_error_inject_error", 00:04:45.609 "ioat_scan_accel_module", 00:04:45.609 "dsa_scan_accel_module", 00:04:45.609 "iaa_scan_accel_module", 00:04:45.609 "keyring_file_remove_key", 00:04:45.609 "keyring_file_add_key", 00:04:45.609 "keyring_linux_set_options", 00:04:45.609 "iscsi_get_histogram", 00:04:45.609 "iscsi_enable_histogram", 00:04:45.609 "iscsi_set_options", 00:04:45.609 "iscsi_get_auth_groups", 00:04:45.609 "iscsi_auth_group_remove_secret", 00:04:45.609 "iscsi_auth_group_add_secret", 00:04:45.609 "iscsi_delete_auth_group", 00:04:45.609 "iscsi_create_auth_group", 00:04:45.609 "iscsi_set_discovery_auth", 00:04:45.609 "iscsi_get_options", 00:04:45.609 "iscsi_target_node_request_logout", 00:04:45.609 "iscsi_target_node_set_redirect", 00:04:45.609 "iscsi_target_node_set_auth", 00:04:45.609 "iscsi_target_node_add_lun", 00:04:45.609 "iscsi_get_stats", 00:04:45.609 "iscsi_get_connections", 00:04:45.609 "iscsi_portal_group_set_auth", 00:04:45.609 "iscsi_start_portal_group", 00:04:45.609 "iscsi_delete_portal_group", 00:04:45.609 "iscsi_create_portal_group", 00:04:45.609 "iscsi_get_portal_groups", 00:04:45.609 "iscsi_delete_target_node", 00:04:45.609 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.609 "iscsi_target_node_add_pg_ig_maps", 00:04:45.609 "iscsi_create_target_node", 00:04:45.609 "iscsi_get_target_nodes", 00:04:45.609 "iscsi_delete_initiator_group", 00:04:45.609 "iscsi_initiator_group_remove_initiators", 00:04:45.609 "iscsi_initiator_group_add_initiators", 00:04:45.609 "iscsi_create_initiator_group", 00:04:45.609 "iscsi_get_initiator_groups", 00:04:45.609 "nvmf_set_crdt", 00:04:45.609 "nvmf_set_config", 00:04:45.609 "nvmf_set_max_subsystems", 00:04:45.609 "nvmf_stop_mdns_prr", 00:04:45.609 "nvmf_publish_mdns_prr", 00:04:45.609 "nvmf_subsystem_get_listeners", 00:04:45.609 "nvmf_subsystem_get_qpairs", 00:04:45.609 "nvmf_subsystem_get_controllers", 00:04:45.609 "nvmf_get_stats", 00:04:45.609 "nvmf_get_transports", 00:04:45.609 
"nvmf_create_transport", 00:04:45.609 "nvmf_get_targets", 00:04:45.609 "nvmf_delete_target", 00:04:45.609 "nvmf_create_target", 00:04:45.609 "nvmf_subsystem_allow_any_host", 00:04:45.609 "nvmf_subsystem_remove_host", 00:04:45.609 "nvmf_subsystem_add_host", 00:04:45.609 "nvmf_ns_remove_host", 00:04:45.609 "nvmf_ns_add_host", 00:04:45.609 "nvmf_subsystem_remove_ns", 00:04:45.609 "nvmf_subsystem_add_ns", 00:04:45.609 "nvmf_subsystem_listener_set_ana_state", 00:04:45.609 "nvmf_discovery_get_referrals", 00:04:45.609 "nvmf_discovery_remove_referral", 00:04:45.609 "nvmf_discovery_add_referral", 00:04:45.609 "nvmf_subsystem_remove_listener", 00:04:45.609 "nvmf_subsystem_add_listener", 00:04:45.609 "nvmf_delete_subsystem", 00:04:45.609 "nvmf_create_subsystem", 00:04:45.609 "nvmf_get_subsystems", 00:04:45.609 "env_dpdk_get_mem_stats", 00:04:45.609 "nbd_get_disks", 00:04:45.609 "nbd_stop_disk", 00:04:45.609 "nbd_start_disk", 00:04:45.609 "ublk_recover_disk", 00:04:45.609 "ublk_get_disks", 00:04:45.609 "ublk_stop_disk", 00:04:45.609 "ublk_start_disk", 00:04:45.609 "ublk_destroy_target", 00:04:45.609 "ublk_create_target", 00:04:45.609 "virtio_blk_create_transport", 00:04:45.609 "virtio_blk_get_transports", 00:04:45.609 "vhost_controller_set_coalescing", 00:04:45.609 "vhost_get_controllers", 00:04:45.609 "vhost_delete_controller", 00:04:45.609 "vhost_create_blk_controller", 00:04:45.609 "vhost_scsi_controller_remove_target", 00:04:45.609 "vhost_scsi_controller_add_target", 00:04:45.609 "vhost_start_scsi_controller", 00:04:45.609 "vhost_create_scsi_controller", 00:04:45.609 "thread_set_cpumask", 00:04:45.609 "framework_get_governor", 00:04:45.609 "framework_get_scheduler", 00:04:45.609 "framework_set_scheduler", 00:04:45.609 "framework_get_reactors", 00:04:45.609 "thread_get_io_channels", 00:04:45.609 "thread_get_pollers", 00:04:45.609 "thread_get_stats", 00:04:45.609 "framework_monitor_context_switch", 00:04:45.609 "spdk_kill_instance", 00:04:45.609 "log_enable_timestamps", 00:04:45.609 "log_get_flags", 00:04:45.609 "log_clear_flag", 00:04:45.609 "log_set_flag", 00:04:45.609 "log_get_level", 00:04:45.609 "log_set_level", 00:04:45.609 "log_get_print_level", 00:04:45.609 "log_set_print_level", 00:04:45.609 "framework_enable_cpumask_locks", 00:04:45.609 "framework_disable_cpumask_locks", 00:04:45.609 "framework_wait_init", 00:04:45.609 "framework_start_init", 00:04:45.609 "scsi_get_devices", 00:04:45.609 "bdev_get_histogram", 00:04:45.609 "bdev_enable_histogram", 00:04:45.609 "bdev_set_qos_limit", 00:04:45.609 "bdev_set_qd_sampling_period", 00:04:45.609 "bdev_get_bdevs", 00:04:45.609 "bdev_reset_iostat", 00:04:45.609 "bdev_get_iostat", 00:04:45.609 "bdev_examine", 00:04:45.609 "bdev_wait_for_examine", 00:04:45.609 "bdev_set_options", 00:04:45.609 "notify_get_notifications", 00:04:45.609 "notify_get_types", 00:04:45.609 "accel_get_stats", 00:04:45.609 "accel_set_options", 00:04:45.609 "accel_set_driver", 00:04:45.609 "accel_crypto_key_destroy", 00:04:45.609 "accel_crypto_keys_get", 00:04:45.609 "accel_crypto_key_create", 00:04:45.609 "accel_assign_opc", 00:04:45.609 "accel_get_module_info", 00:04:45.609 "accel_get_opc_assignments", 00:04:45.609 "vmd_rescan", 00:04:45.609 "vmd_remove_device", 00:04:45.609 "vmd_enable", 00:04:45.609 "sock_get_default_impl", 00:04:45.609 "sock_set_default_impl", 00:04:45.609 "sock_impl_set_options", 00:04:45.609 "sock_impl_get_options", 00:04:45.609 "iobuf_get_stats", 00:04:45.609 "iobuf_set_options", 00:04:45.609 "framework_get_pci_devices", 00:04:45.609 
"framework_get_config", 00:04:45.609 "framework_get_subsystems", 00:04:45.609 "trace_get_info", 00:04:45.609 "trace_get_tpoint_group_mask", 00:04:45.609 "trace_disable_tpoint_group", 00:04:45.609 "trace_enable_tpoint_group", 00:04:45.609 "trace_clear_tpoint_mask", 00:04:45.609 "trace_set_tpoint_mask", 00:04:45.609 "keyring_get_keys", 00:04:45.609 "spdk_get_version", 00:04:45.609 "rpc_get_methods" 00:04:45.609 ] 00:04:45.868 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.868 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.868 22:15:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59816 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59816 ']' 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59816 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59816 00:04:45.868 killing process with pid 59816 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59816' 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59816 00:04:45.868 22:15:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59816 00:04:46.437 ************************************ 00:04:46.437 END TEST spdkcli_tcp 00:04:46.437 ************************************ 00:04:46.437 00:04:46.437 real 0m2.082s 00:04:46.437 user 0m3.460s 00:04:46.437 sys 0m0.668s 00:04:46.437 22:16:00 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.437 22:16:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.697 22:16:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.697 22:16:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.697 22:16:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.697 22:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.697 22:16:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.697 ************************************ 00:04:46.697 START TEST dpdk_mem_utility 00:04:46.697 ************************************ 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.697 * Looking for test storage... 
00:04:46.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:46.697 22:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.697 22:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59908 00:04:46.697 22:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.697 22:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59908 00:04:46.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59908 ']' 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.697 22:16:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.697 [2024-07-15 22:16:00.277456] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:46.697 [2024-07-15 22:16:00.277531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:04:46.955 [2024-07-15 22:16:00.420230] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.955 [2024-07-15 22:16:00.520800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.955 [2024-07-15 22:16:00.563396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:47.522 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.522 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:47.522 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.522 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.522 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.522 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.522 { 00:04:47.522 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.522 } 00:04:47.522 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.523 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.782 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.782 1 heaps totaling size 814.000000 MiB 00:04:47.782 size: 814.000000 MiB heap id: 0 00:04:47.782 end heaps---------- 00:04:47.782 8 mempools totaling size 598.116089 MiB 00:04:47.782 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.782 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.782 size: 84.521057 MiB name: bdev_io_59908 00:04:47.782 size: 51.011292 MiB name: evtpool_59908 00:04:47.782 size: 50.003479 
MiB name: msgpool_59908 00:04:47.782 size: 21.763794 MiB name: PDU_Pool 00:04:47.782 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.782 size: 0.026123 MiB name: Session_Pool 00:04:47.782 end mempools------- 00:04:47.782 6 memzones totaling size 4.142822 MiB 00:04:47.782 size: 1.000366 MiB name: RG_ring_0_59908 00:04:47.782 size: 1.000366 MiB name: RG_ring_1_59908 00:04:47.782 size: 1.000366 MiB name: RG_ring_4_59908 00:04:47.782 size: 1.000366 MiB name: RG_ring_5_59908 00:04:47.782 size: 0.125366 MiB name: RG_ring_2_59908 00:04:47.782 size: 0.015991 MiB name: RG_ring_3_59908 00:04:47.782 end memzones------- 00:04:47.782 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.782 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:04:47.782 list of free elements. size: 12.472290 MiB 00:04:47.782 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.782 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.782 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.782 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.782 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.782 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.782 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.782 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.782 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:47.782 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:04:47.782 element at address: 0x20000b200000 with size: 0.489807 MiB 00:04:47.782 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:47.782 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.782 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:47.782 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:47.782 list of standard malloc elements. 
size: 199.265137 MiB 00:04:47.782 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.782 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.782 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.782 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.782 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.782 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.782 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.782 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.782 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.782 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:47.782 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.782 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.782 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:47.782 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:47.782 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:47.782 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:47.783 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92200 
with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa946c0 with size: 0.000183 MiB 
00:04:47.783 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.783 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:47.783 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:47.784 element at 
address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fd80 
with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.784 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.784 list of memzone associated elements. size: 602.262573 MiB 00:04:47.784 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.784 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.784 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.784 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.784 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.784 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59908_0 00:04:47.784 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.784 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59908_0 00:04:47.784 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.784 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59908_0 00:04:47.784 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.784 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.784 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.784 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.784 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.784 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59908 00:04:47.784 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.784 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59908 00:04:47.784 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.784 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59908 00:04:47.784 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.784 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.784 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.784 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.784 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.784 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.784 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.784 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.784 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.784 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59908 00:04:47.784 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.784 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59908 00:04:47.784 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.784 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59908 00:04:47.784 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.784 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59908 00:04:47.784 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:47.784 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59908 00:04:47.784 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.784 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.784 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.784 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.784 element at address: 0x20001947c540 with size: 
0.250488 MiB 00:04:47.784 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.784 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:47.784 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59908 00:04:47.784 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.784 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.784 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:47.784 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.784 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:47.784 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59908 00:04:47.784 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:47.784 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.784 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:47.784 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59908 00:04:47.784 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:47.784 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59908 00:04:47.784 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:47.784 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.784 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.784 22:16:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59908 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59908 ']' 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59908 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59908 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.784 killing process with pid 59908 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59908' 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59908 00:04:47.784 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59908 00:04:48.041 00:04:48.041 real 0m1.529s 00:04:48.041 user 0m1.536s 00:04:48.041 sys 0m0.438s 00:04:48.041 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.041 ************************************ 00:04:48.041 END TEST dpdk_mem_utility 00:04:48.041 ************************************ 00:04:48.041 22:16:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.300 22:16:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.300 22:16:01 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.300 22:16:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.300 22:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.300 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.300 ************************************ 00:04:48.300 START TEST event 00:04:48.300 ************************************ 00:04:48.300 22:16:01 event -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.300 * Looking for test storage... 00:04:48.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.300 22:16:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.300 22:16:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.300 22:16:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.300 22:16:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:48.300 22:16:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.300 22:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.300 ************************************ 00:04:48.300 START TEST event_perf 00:04:48.300 ************************************ 00:04:48.300 22:16:01 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.300 Running I/O for 1 seconds...[2024-07-15 22:16:01.859684] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:04:48.300 [2024-07-15 22:16:01.859798] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59984 ] 00:04:48.558 [2024-07-15 22:16:02.005928] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.558 [2024-07-15 22:16:02.107583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.558 [2024-07-15 22:16:02.107660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.558 Running I/O for 1 seconds...[2024-07-15 22:16:02.107843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.558 [2024-07-15 22:16:02.107844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.933 00:04:49.933 lcore 0: 192163 00:04:49.933 lcore 1: 192161 00:04:49.933 lcore 2: 192161 00:04:49.933 lcore 3: 192162 00:04:49.933 done. 00:04:49.933 00:04:49.933 ************************************ 00:04:49.933 END TEST event_perf 00:04:49.933 ************************************ 00:04:49.933 real 0m1.357s 00:04:49.933 user 0m4.157s 00:04:49.933 sys 0m0.069s 00:04:49.933 22:16:03 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.933 22:16:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.933 22:16:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.933 22:16:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.933 22:16:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:49.933 22:16:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.933 22:16:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.933 ************************************ 00:04:49.933 START TEST event_reactor 00:04:49.933 ************************************ 00:04:49.933 22:16:03 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.933 [2024-07-15 22:16:03.292526] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:49.933 [2024-07-15 22:16:03.292679] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:04:49.933 [2024-07-15 22:16:03.438065] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.933 [2024-07-15 22:16:03.537015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.313 test_start 00:04:51.313 oneshot 00:04:51.313 tick 100 00:04:51.313 tick 100 00:04:51.313 tick 250 00:04:51.313 tick 100 00:04:51.313 tick 100 00:04:51.313 tick 250 00:04:51.313 tick 100 00:04:51.313 tick 500 00:04:51.313 tick 100 00:04:51.313 tick 100 00:04:51.313 tick 250 00:04:51.313 tick 100 00:04:51.313 tick 100 00:04:51.313 test_end 00:04:51.313 00:04:51.313 real 0m1.342s 00:04:51.313 user 0m1.177s 00:04:51.313 sys 0m0.057s 00:04:51.313 22:16:04 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.313 ************************************ 00:04:51.313 END TEST event_reactor 00:04:51.313 ************************************ 00:04:51.313 22:16:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.313 22:16:04 event -- common/autotest_common.sh@1142 -- # return 0 00:04:51.313 22:16:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.313 22:16:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:51.313 22:16:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.313 22:16:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.313 ************************************ 00:04:51.313 START TEST event_reactor_perf 00:04:51.313 ************************************ 00:04:51.313 22:16:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.313 [2024-07-15 22:16:04.705799] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:51.313 [2024-07-15 22:16:04.705910] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60058 ] 00:04:51.313 [2024-07-15 22:16:04.850156] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.571 [2024-07-15 22:16:04.952087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.506 test_start 00:04:52.506 test_end 00:04:52.506 Performance: 431228 events per second 00:04:52.506 00:04:52.506 real 0m1.350s 00:04:52.506 user 0m1.188s 00:04:52.506 sys 0m0.056s 00:04:52.506 22:16:06 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.506 22:16:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.506 ************************************ 00:04:52.506 END TEST event_reactor_perf 00:04:52.506 ************************************ 00:04:52.506 22:16:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.506 22:16:06 event -- event/event.sh@49 -- # uname -s 00:04:52.506 22:16:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.506 22:16:06 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.506 22:16:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.506 22:16:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.506 22:16:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.506 ************************************ 00:04:52.506 START TEST event_scheduler 00:04:52.506 ************************************ 00:04:52.506 22:16:06 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.764 * Looking for test storage... 00:04:52.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.764 22:16:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.764 22:16:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60114 00:04:52.764 22:16:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.764 22:16:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.764 22:16:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60114 00:04:52.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60114 ']' 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.764 22:16:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.764 [2024-07-15 22:16:06.285795] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
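The three micro-benchmarks traced above (event_perf, event_reactor, reactor_perf) are launched the same way: the harness's run_test wrapper takes a test name plus the binary and its arguments and prints the START/END banners seen in the trace. A condensed sketch of the three invocations, copied from the xtrace lines above (run_test itself lives in the repo's common helpers and is assumed here rather than reproduced):

    # 0xF = four reactors, -t 1 = run for one second; prints the per-lcore event counts
    run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # single reactor; the "oneshot"/"tick"/"test_end" lines above are its poller trace
    run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
    # single reactor; reports raw throughput ("Performance: ... events per second")
    run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1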
00:04:52.764 [2024-07-15 22:16:06.285875] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:04:53.022 [2024-07-15 22:16:06.419746] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.022 [2024-07-15 22:16:06.520809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.022 [2024-07-15 22:16:06.520990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.022 [2024-07-15 22:16:06.521061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.022 [2024-07-15 22:16:06.521062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.589 22:16:07 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.589 22:16:07 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:53.589 22:16:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.589 22:16:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.589 22:16:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.589 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.589 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.589 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.589 POWER: Cannot set governor of lcore 0 to performance 00:04:53.590 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.590 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.590 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.590 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.590 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:53.590 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:53.590 POWER: Unable to set Power Management Environment for lcore 0 00:04:53.590 [2024-07-15 22:16:07.182735] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:53.590 [2024-07-15 22:16:07.182776] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:53.590 [2024-07-15 22:16:07.182807] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:53.590 [2024-07-15 22:16:07.182880] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:53.590 [2024-07-15 22:16:07.182915] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:53.590 [2024-07-15 22:16:07.182945] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:53.590 22:16:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.590 22:16:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.590 22:16:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.590 22:16:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 [2024-07-15 22:16:07.230966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.848 [2024-07-15 22:16:07.259048] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:53.848 22:16:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.848 22:16:07 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.848 22:16:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 ************************************ 00:04:53.848 START TEST scheduler_create_thread 00:04:53.848 ************************************ 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 2 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 3 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 4 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 5 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 6 00:04:53.848 
22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 7 00:04:53.848 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.849 8 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.849 9 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.849 10 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.849 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.416 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.416 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.416 22:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.416 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.416 22:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.352 22:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.352 22:16:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.352 22:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.352 22:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.287 22:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.287 22:16:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.287 22:16:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.287 22:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.287 22:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.223 ************************************ 00:04:57.223 END TEST scheduler_create_thread 00:04:57.223 ************************************ 00:04:57.223 22:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.223 00:04:57.223 real 0m3.225s 00:04:57.223 user 0m0.022s 00:04:57.223 sys 0m0.011s 00:04:57.223 22:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.223 22:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:57.223 22:16:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.223 22:16:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60114 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60114 ']' 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60114 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60114 00:04:57.223 killing process with pid 60114 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60114' 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60114 00:04:57.223 22:16:10 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60114 00:04:57.482 [2024-07-15 22:16:10.879783] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
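Stripped of the xtrace noise, the scheduler run above is a short RPC conversation with the app that was started as `scheduler -m 0xF -p 0x2 --wait-for-rpc -f`. A condensed sketch reconstructed from the rpc_cmd lines in the trace (rpc.py and the default /var/tmp/spdk.sock socket are what rpc_cmd resolves to here; the POWER/governor errors above only mean the dynamic scheduler ran without the DPDK governor inside this VM):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC framework_set_scheduler dynamic        # issued while the app is still in --wait-for-rpc state
    $RPC framework_start_init                   # finish subsystem init, reactors start scheduling
    # the scheduler_thread_* calls come from the test's RPC plugin, hence --plugin in the trace
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
    $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50    # set thread 11 to 50% activity
    $RPC --plugin scheduler_plugin scheduler_thread_delete 12           # delete the thread named "deleted"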
00:04:57.740 ************************************ 00:04:57.740 END TEST event_scheduler 00:04:57.740 ************************************ 00:04:57.740 00:04:57.740 real 0m5.217s 00:04:57.740 user 0m10.422s 00:04:57.740 sys 0m0.423s 00:04:57.740 22:16:11 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.740 22:16:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.028 22:16:11 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.028 22:16:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:58.028 22:16:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:58.028 22:16:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.028 22:16:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.028 22:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.028 ************************************ 00:04:58.028 START TEST app_repeat 00:04:58.028 ************************************ 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60219 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60219' 00:04:58.028 Process app_repeat pid: 60219 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.028 spdk_app_start Round 0 00:04:58.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.028 22:16:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60219 /var/tmp/spdk-nbd.sock 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60219 ']' 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.028 22:16:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.028 [2024-07-15 22:16:11.450519] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:04:58.028 [2024-07-15 22:16:11.450621] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:04:58.028 [2024-07-15 22:16:11.594172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.287 [2024-07-15 22:16:11.695164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.287 [2024-07-15 22:16:11.695166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.287 [2024-07-15 22:16:11.739107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.854 22:16:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.854 22:16:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.854 22:16:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.112 Malloc0 00:04:59.112 22:16:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.371 Malloc1 00:04:59.371 22:16:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.371 22:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.630 /dev/nbd0 00:04:59.630 22:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.630 22:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.630 22:16:13 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.630 1+0 records in 00:04:59.630 1+0 records out 00:04:59.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383875 s, 10.7 MB/s 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.630 22:16:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.630 22:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.630 22:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.630 22:16:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.889 /dev/nbd1 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.889 1+0 records in 00:04:59.889 1+0 records out 00:04:59.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373785 s, 11.0 MB/s 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.889 22:16:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.889 22:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.148 { 00:05:00.148 "nbd_device": "/dev/nbd0", 00:05:00.148 "bdev_name": "Malloc0" 00:05:00.148 }, 00:05:00.148 { 00:05:00.148 "nbd_device": "/dev/nbd1", 00:05:00.148 "bdev_name": "Malloc1" 00:05:00.148 } 00:05:00.148 ]' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.148 { 00:05:00.148 "nbd_device": "/dev/nbd0", 00:05:00.148 "bdev_name": "Malloc0" 00:05:00.148 }, 00:05:00.148 { 00:05:00.148 "nbd_device": "/dev/nbd1", 00:05:00.148 "bdev_name": "Malloc1" 00:05:00.148 } 00:05:00.148 ]' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.148 /dev/nbd1' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.148 /dev/nbd1' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.148 256+0 records in 00:05:00.148 256+0 records out 00:05:00.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126537 s, 82.9 MB/s 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.148 256+0 records in 00:05:00.148 256+0 records out 00:05:00.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280509 s, 37.4 MB/s 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.148 256+0 records in 00:05:00.148 256+0 records out 00:05:00.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282225 s, 37.2 MB/s 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.148 22:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.406 22:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.663 22:16:14 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.663 22:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.920 22:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.921 22:16:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.921 22:16:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.921 22:16:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.921 22:16:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.921 22:16:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.193 22:16:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.450 [2024-07-15 22:16:14.881820] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.450 [2024-07-15 22:16:14.992397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.450 [2024-07-15 22:16:14.992400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.450 [2024-07-15 22:16:15.037989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.450 [2024-07-15 22:16:15.038062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.450 [2024-07-15 22:16:15.038074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.785 22:16:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.785 spdk_app_start Round 1 00:05:04.785 22:16:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.785 22:16:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60219 /var/tmp/spdk-nbd.sock 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60219 ']' 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
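Every app_repeat round above performs the same nbd round-trip check. Condensed from the Round 0 trace (device names, the 4 KiB block size, the 256-block random file and the 1 MiB compare window are all taken from this log; rpc.py talks to the app's /var/tmp/spdk-nbd.sock):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                 # creates Malloc0; run again for Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0           # expose each bdev as a kernel nbd device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    RAND=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$RAND bs=4096 count=256   # 1 MiB of random data
    dd if=$RAND of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=$RAND of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M $RAND /dev/nbd0                    # read back through nbd and compare
    cmp -b -n 1M $RAND /dev/nbd1
    rm $RAND
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1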
00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.785 22:16:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.785 22:16:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.785 Malloc0 00:05:04.785 22:16:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.785 Malloc1 00:05:04.785 22:16:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.785 22:16:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.043 /dev/nbd0 00:05:05.043 22:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.043 22:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.043 1+0 records in 00:05:05.043 1+0 records out 
00:05:05.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238229 s, 17.2 MB/s 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.043 22:16:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.043 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.043 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.043 22:16:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.302 /dev/nbd1 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.302 1+0 records in 00:05:05.302 1+0 records out 00:05:05.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333995 s, 12.3 MB/s 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.302 22:16:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.302 22:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.559 { 00:05:05.559 "nbd_device": "/dev/nbd0", 00:05:05.559 "bdev_name": "Malloc0" 00:05:05.559 }, 00:05:05.559 { 00:05:05.559 "nbd_device": "/dev/nbd1", 00:05:05.559 "bdev_name": "Malloc1" 00:05:05.559 } 
00:05:05.559 ]' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.559 { 00:05:05.559 "nbd_device": "/dev/nbd0", 00:05:05.559 "bdev_name": "Malloc0" 00:05:05.559 }, 00:05:05.559 { 00:05:05.559 "nbd_device": "/dev/nbd1", 00:05:05.559 "bdev_name": "Malloc1" 00:05:05.559 } 00:05:05.559 ]' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.559 /dev/nbd1' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.559 /dev/nbd1' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.559 256+0 records in 00:05:05.559 256+0 records out 00:05:05.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112433 s, 93.3 MB/s 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.559 256+0 records in 00:05:05.559 256+0 records out 00:05:05.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256004 s, 41.0 MB/s 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.559 256+0 records in 00:05:05.559 256+0 records out 00:05:05.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268738 s, 39.0 MB/s 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.559 22:16:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.559 22:16:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.818 22:16:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.076 22:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.335 22:16:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.335 22:16:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.594 22:16:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.852 [2024-07-15 22:16:20.349973] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.852 [2024-07-15 22:16:20.451607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.852 [2024-07-15 22:16:20.451630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.112 [2024-07-15 22:16:20.497195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.112 [2024-07-15 22:16:20.497269] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.112 [2024-07-15 22:16:20.497280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.641 22:16:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.641 spdk_app_start Round 2 00:05:09.641 22:16:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.641 22:16:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60219 /var/tmp/spdk-nbd.sock 00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60219 ']' 00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
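The nbd_dd_data_verify steps traced above amount to a plain write-then-compare loop: fill a scratch file with 1 MiB of random data, copy it onto each exported /dev/nbdX with O_DIRECT, then cmp the first 1M of every device back against the scratch file. A minimal standalone sketch of that pattern, assuming the two devices are already exported and using a placeholder scratch path rather than the test's own nbdrandtest file:

    #!/usr/bin/env bash
    set -e
    nbd_list=(/dev/nbd0 /dev/nbd1)                 # assumed to be exported already
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256               # 256 x 4 KiB = 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct    # write, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"            # any mismatch makes cmp exit non-zero and aborts via set -e
    done
    rm -f "$tmp_file"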
00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.641 22:16:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.899 22:16:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.899 22:16:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:09.899 22:16:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.158 Malloc0 00:05:10.158 22:16:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.158 Malloc1 00:05:10.416 22:16:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.416 /dev/nbd0 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.416 22:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.416 22:16:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.416 1+0 records in 00:05:10.416 1+0 records out 
00:05:10.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263215 s, 15.6 MB/s 00:05:10.416 22:16:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.416 22:16:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:10.416 22:16:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.416 22:16:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.416 22:16:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:10.416 22:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.416 22:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.416 22:16:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.675 /dev/nbd1 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.675 1+0 records in 00:05:10.675 1+0 records out 00:05:10.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189283 s, 21.6 MB/s 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.675 22:16:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.675 22:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.933 22:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.933 { 00:05:10.933 "nbd_device": "/dev/nbd0", 00:05:10.933 "bdev_name": "Malloc0" 00:05:10.933 }, 00:05:10.933 { 00:05:10.933 "nbd_device": "/dev/nbd1", 00:05:10.933 "bdev_name": "Malloc1" 00:05:10.933 } 
00:05:10.933 ]' 00:05:10.933 22:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.933 { 00:05:10.933 "nbd_device": "/dev/nbd0", 00:05:10.933 "bdev_name": "Malloc0" 00:05:10.933 }, 00:05:10.934 { 00:05:10.934 "nbd_device": "/dev/nbd1", 00:05:10.934 "bdev_name": "Malloc1" 00:05:10.934 } 00:05:10.934 ]' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.934 /dev/nbd1' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.934 /dev/nbd1' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.934 256+0 records in 00:05:10.934 256+0 records out 00:05:10.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113338 s, 92.5 MB/s 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.934 256+0 records in 00:05:10.934 256+0 records out 00:05:10.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209373 s, 50.1 MB/s 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.934 22:16:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.191 256+0 records in 00:05:11.191 256+0 records out 00:05:11.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272053 s, 38.5 MB/s 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.191 22:16:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.191 22:16:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.192 22:16:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.450 22:16:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.707 22:16:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.707 22:16:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.965 22:16:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.224 [2024-07-15 22:16:25.664242] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.224 [2024-07-15 22:16:25.762673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.224 [2024-07-15 22:16:25.762675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.224 [2024-07-15 22:16:25.806316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.224 [2024-07-15 22:16:25.806409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.224 [2024-07-15 22:16:25.806421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.589 22:16:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60219 /var/tmp/spdk-nbd.sock 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60219 ']' 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
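The nbd_get_count step that follows the teardown simply asks the target for its current NBD exports over the RPC socket and counts how many /dev/nbd entries come back; after both nbd_stop_disk calls the JSON list is empty, grep -c prints 0, and the `'[' 0 -ne 0 ']'` guard does not trip, so the helper returns 0. A rough equivalent of that query, with the RPC socket path taken from the trace and the rpc.py path shortened for readability:

    rpc_sock=/var/tmp/spdk-nbd.sock
    disks_json=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 on zero matches, hence the fallback
    echo "exported NBD devices: $count"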
00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:15.589 22:16:28 event.app_repeat -- event/event.sh@39 -- # killprocess 60219 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60219 ']' 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60219 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60219 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.589 killing process with pid 60219 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60219' 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60219 00:05:15.589 22:16:28 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60219 00:05:15.589 spdk_app_start is called in Round 0. 00:05:15.589 Shutdown signal received, stop current app iteration 00:05:15.590 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:05:15.590 spdk_app_start is called in Round 1. 00:05:15.590 Shutdown signal received, stop current app iteration 00:05:15.590 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:05:15.590 spdk_app_start is called in Round 2. 00:05:15.590 Shutdown signal received, stop current app iteration 00:05:15.590 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:05:15.590 spdk_app_start is called in Round 3. 
00:05:15.590 Shutdown signal received, stop current app iteration 00:05:15.590 22:16:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.590 22:16:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:15.590 00:05:15.590 real 0m17.511s 00:05:15.590 user 0m37.995s 00:05:15.590 sys 0m3.070s 00:05:15.590 22:16:28 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.590 ************************************ 00:05:15.590 END TEST app_repeat 00:05:15.590 ************************************ 00:05:15.590 22:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 22:16:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.590 22:16:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.590 22:16:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:15.590 22:16:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.590 22:16:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.590 22:16:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 ************************************ 00:05:15.590 START TEST cpu_locks 00:05:15.590 ************************************ 00:05:15.590 22:16:28 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:15.590 * Looking for test storage... 00:05:15.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.590 22:16:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.590 22:16:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.590 22:16:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.590 22:16:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.590 22:16:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.590 22:16:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.590 22:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 ************************************ 00:05:15.590 START TEST default_locks 00:05:15.590 ************************************ 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60635 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60635 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60635 ']' 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
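Every cpu_locks subtest below repeats the lifecycle just traced for app_repeat: launch spdk_tgt in the background, wait for its RPC UNIX socket, run the checks, then tear the target down with killprocess. The teardown is deliberately defensive, as the trace a few entries above shows: it confirms the PID is still alive with kill -0, reads the comm name with ps to make sure it is an SPDK reactor and not an unrelated process, then sends the default SIGTERM and waits. A simplified stand-in for that helper (not the real autotest_common.sh function, just the same idea):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                     # default SIGTERM; the reactor traps it and shuts down
        wait "$pid" 2>/dev/null || true                 # reap it when it is a child of this shell
    }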
00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.590 22:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 [2024-07-15 22:16:29.188058] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:15.590 [2024-07-15 22:16:29.188136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60635 ] 00:05:15.848 [2024-07-15 22:16:29.330844] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.848 [2024-07-15 22:16:29.429392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.848 [2024-07-15 22:16:29.471593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.415 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.415 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:16.415 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60635 00:05:16.415 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60635 00:05:16.415 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60635 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60635 ']' 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60635 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60635 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.981 killing process with pid 60635 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60635' 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60635 00:05:16.981 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60635 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60635 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60635 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.240 22:16:30 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60635 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60635 ']' 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.240 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60635) - No such process 00:05:17.240 ERROR: process (pid: 60635) is no longer running 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.240 00:05:17.240 real 0m1.721s 00:05:17.240 user 0m1.791s 00:05:17.240 sys 0m0.537s 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.240 22:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.240 ************************************ 00:05:17.240 END TEST default_locks 00:05:17.240 ************************************ 00:05:17.498 22:16:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.498 22:16:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.498 22:16:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.498 22:16:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.498 22:16:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.498 ************************************ 00:05:17.498 START TEST default_locks_via_rpc 00:05:17.498 ************************************ 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60687 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.498 22:16:30 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60687 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60687 ']' 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.498 22:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.498 [2024-07-15 22:16:30.972168] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:17.498 [2024-07-15 22:16:30.972245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:05:17.498 [2024-07-15 22:16:31.102983] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.757 [2024-07-15 22:16:31.210312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.757 [2024-07-15 22:16:31.254400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60687 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60687 00:05:18.323 22:16:31 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60687 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60687 ']' 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60687 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60687 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.960 killing process with pid 60687 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60687' 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60687 00:05:18.960 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60687 00:05:19.218 00:05:19.218 real 0m1.804s 00:05:19.218 user 0m1.920s 00:05:19.218 sys 0m0.554s 00:05:19.218 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.218 22:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.218 ************************************ 00:05:19.218 END TEST default_locks_via_rpc 00:05:19.218 ************************************ 00:05:19.218 22:16:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.218 22:16:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.218 22:16:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.218 22:16:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.218 22:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.218 ************************************ 00:05:19.218 START TEST non_locking_app_on_locked_coremask 00:05:19.218 ************************************ 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60733 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60733 /var/tmp/spdk.sock 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60733 ']' 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.218 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.218 22:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.476 [2024-07-15 22:16:32.861036] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:19.476 [2024-07-15 22:16:32.861112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60733 ] 00:05:19.476 [2024-07-15 22:16:32.991534] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.476 [2024-07-15 22:16:33.092538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.734 [2024-07-15 22:16:33.138918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60749 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60749 /var/tmp/spdk2.sock 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60749 ']' 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.300 22:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.300 [2024-07-15 22:16:33.809138] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:20.300 [2024-07-15 22:16:33.809248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:05:20.558 [2024-07-15 22:16:33.948367] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
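All three locking subtests here observe the same thing: a target started on core mask 0x1 takes a per-core lock that shows up as spdk_cpu_lock in `lslocks -p <pid>` output, and that lock is what --disable-cpumask-locks (at start-up) or the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs (at runtime) release or re-take. A hedged sketch of the pattern being exercised, with binary and socket paths copied from the trace and the wait-for-socket step elided:

    # primary target claims the lock for core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    primary=$!
    # ... wait for /var/tmp/spdk.sock to come up ...
    lslocks -p "$primary" | grep -q spdk_cpu_lock && echo "core 0 lock held"

    # a second instance can only share core 0 if it skips the locks
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # the primary can also drop and re-take its locks over RPC
    scripts/rpc.py framework_disable_cpumask_locks
    scripts/rpc.py framework_enable_cpumask_locks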
00:05:20.558 [2024-07-15 22:16:33.948419] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.558 [2024-07-15 22:16:34.155794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.815 [2024-07-15 22:16:34.238835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.073 22:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.073 22:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.073 22:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60733 00:05:21.073 22:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60733 00:05:21.073 22:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60733 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60733 ']' 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60733 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60733 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.448 killing process with pid 60733 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60733' 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60733 00:05:22.448 22:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60733 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60749 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60749 ']' 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60749 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60749 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60749' 00:05:23.016 killing process with pid 60749 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60749 00:05:23.016 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60749 00:05:23.275 00:05:23.275 real 0m3.944s 00:05:23.275 user 0m4.327s 00:05:23.275 sys 0m1.123s 00:05:23.275 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.275 ************************************ 00:05:23.275 END TEST non_locking_app_on_locked_coremask 00:05:23.275 ************************************ 00:05:23.275 22:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 22:16:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.275 22:16:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.275 22:16:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.275 22:16:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.275 22:16:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 ************************************ 00:05:23.275 START TEST locking_app_on_unlocked_coremask 00:05:23.275 ************************************ 00:05:23.275 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60816 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60816 /var/tmp/spdk.sock 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60816 ']' 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.276 22:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 [2024-07-15 22:16:36.881770] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:23.276 [2024-07-15 22:16:36.881859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:05:23.535 [2024-07-15 22:16:37.021510] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.535 [2024-07-15 22:16:37.021584] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.535 [2024-07-15 22:16:37.121106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.535 [2024-07-15 22:16:37.163193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:24.103 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60832 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60832 /var/tmp/spdk2.sock 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60832 ']' 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.391 22:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.391 [2024-07-15 22:16:37.794694] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:24.391 [2024-07-15 22:16:37.794773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:05:24.391 [2024-07-15 22:16:37.933187] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.649 [2024-07-15 22:16:38.132800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.649 [2024-07-15 22:16:38.214697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.216 22:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.216 22:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.216 22:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60832 00:05:25.216 22:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.216 22:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60832 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60816 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60816 ']' 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60816 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60816 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.148 killing process with pid 60816 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60816' 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60816 00:05:26.148 22:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60816 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60832 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60832 ']' 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60832 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60832 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.714 killing process with pid 60832 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60832' 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60832 00:05:26.714 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60832 00:05:27.318 00:05:27.318 real 0m3.840s 00:05:27.318 user 0m4.163s 00:05:27.318 sys 0m1.111s 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.318 ************************************ 00:05:27.318 END TEST locking_app_on_unlocked_coremask 00:05:27.318 ************************************ 00:05:27.318 22:16:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.318 22:16:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.318 22:16:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.318 22:16:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.318 22:16:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.318 ************************************ 00:05:27.318 START TEST locking_app_on_locked_coremask 00:05:27.318 ************************************ 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60899 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60899 /var/tmp/spdk.sock 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60899 ']' 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.318 22:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.318 [2024-07-15 22:16:40.785977] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:27.318 [2024-07-15 22:16:40.786051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:05:27.318 [2024-07-15 22:16:40.925759] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.576 [2024-07-15 22:16:41.024392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.576 [2024-07-15 22:16:41.066235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60909 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60909 /var/tmp/spdk2.sock 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60909 /var/tmp/spdk2.sock 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60909 /var/tmp/spdk2.sock 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60909 ']' 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.144 22:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.144 [2024-07-15 22:16:41.696771] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:28.144 [2024-07-15 22:16:41.696842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:05:28.403 [2024-07-15 22:16:41.832076] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60899 has claimed it. 00:05:28.403 [2024-07-15 22:16:41.832155] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.971 ERROR: process (pid: 60909) is no longer running 00:05:28.971 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60909) - No such process 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60899 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60899 00:05:28.971 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60899 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60899 ']' 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60899 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60899 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.229 killing process with pid 60899 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60899' 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60899 00:05:29.229 22:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60899 00:05:29.488 00:05:29.488 real 0m2.387s 00:05:29.488 user 0m2.657s 00:05:29.488 sys 0m0.599s 00:05:29.488 22:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.488 22:16:43 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:29.488 ************************************ 00:05:29.488 END TEST locking_app_on_locked_coremask 00:05:29.488 ************************************ 00:05:29.746 22:16:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.746 22:16:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:29.746 22:16:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.746 22:16:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.746 22:16:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.746 ************************************ 00:05:29.746 START TEST locking_overlapped_coremask 00:05:29.746 ************************************ 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60955 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60955 /var/tmp/spdk.sock 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60955 ']' 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.746 22:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:29.746 [2024-07-15 22:16:43.244941] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:29.747 [2024-07-15 22:16:43.245019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60955 ] 00:05:30.004 [2024-07-15 22:16:43.390013] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.004 [2024-07-15 22:16:43.490743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.004 [2024-07-15 22:16:43.490930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.004 [2024-07-15 22:16:43.490931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.004 [2024-07-15 22:16:43.533724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60973 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60973 /var/tmp/spdk2.sock 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60973 /var/tmp/spdk2.sock 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60973 /var/tmp/spdk2.sock 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60973 ']' 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.571 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.571 [2024-07-15 22:16:44.150363] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:30.571 [2024-07-15 22:16:44.150441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60973 ] 00:05:30.830 [2024-07-15 22:16:44.291334] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60955 has claimed it. 00:05:30.830 [2024-07-15 22:16:44.291407] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.396 ERROR: process (pid: 60973) is no longer running 00:05:31.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60973) - No such process 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60955 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60955 ']' 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60955 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60955 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60955' 00:05:31.396 killing process with pid 60955 00:05:31.396 22:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60955 00:05:31.396 22:16:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60955 00:05:31.654 00:05:31.654 real 0m1.996s 00:05:31.654 user 0m5.372s 00:05:31.654 sys 0m0.400s 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.654 ************************************ 00:05:31.654 END TEST locking_overlapped_coremask 00:05:31.654 ************************************ 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 22:16:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:31.654 22:16:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:31.654 22:16:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.654 22:16:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.654 22:16:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 ************************************ 00:05:31.654 START TEST locking_overlapped_coremask_via_rpc 00:05:31.654 ************************************ 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61013 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61013 /var/tmp/spdk.sock 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61013 ']' 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.654 22:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.912 [2024-07-15 22:16:45.313005] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:31.912 [2024-07-15 22:16:45.313082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61013 ] 00:05:31.912 [2024-07-15 22:16:45.454998] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
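For reference, the locking_overlapped_coremask run that just finished relies on SPDK's per-core lock files: while a target holds a core, a /var/tmp/spdk_cpu_lock_NNN file exists and shows up in lslocks, which is what the locks_exist and check_remaining_locks helpers verify above. A minimal shell sketch of the same check, assuming a target started with -m 0x7 is still running (standalone commands, not part of the harness):

    # Lock files for cores 0-2 should exist while a target started with -m 0x7 is up
    ls /var/tmp/spdk_cpu_lock_*
    # Confirm they are held as file locks, as the harness does via lslocks | grep spdk_cpu_lock
    lslocks | grep spdk_cpu_lock

Both targets in the via_rpc test starting here are launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above), so no lock files are created until locking is re-enabled over RPC.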
00:05:31.912 [2024-07-15 22:16:45.455074] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.170 [2024-07-15 22:16:45.556752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.170 [2024-07-15 22:16:45.556948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.170 [2024-07-15 22:16:45.556962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.170 [2024-07-15 22:16:45.623357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61031 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61031 /var/tmp/spdk2.sock 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61031 ']' 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.738 22:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.738 [2024-07-15 22:16:46.295200] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:32.738 [2024-07-15 22:16:46.295282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:05:32.996 [2024-07-15 22:16:46.443621] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.996 [2024-07-15 22:16:46.443719] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.253 [2024-07-15 22:16:46.776218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.253 [2024-07-15 22:16:46.777418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:33.253 [2024-07-15 22:16:46.777424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.510 [2024-07-15 22:16:46.932591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:34.076 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 [2024-07-15 22:16:47.456818] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61013 has claimed it. 
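The failing core is no accident: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks overlap on exactly one core. A quick check of that overlap in shell arithmetic, consistent with the reactor core numbers logged above:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); the AND keeps only the shared core
    echo $(( 0x7 & 0x1c ))    # prints 4, i.e. bit 2, so core 2 is claimed by both masks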
00:05:34.077 request: 00:05:34.077 { 00:05:34.077 "method": "framework_enable_cpumask_locks", 00:05:34.077 "req_id": 1 00:05:34.077 } 00:05:34.077 Got JSON-RPC error response 00:05:34.077 response: 00:05:34.077 { 00:05:34.077 "code": -32603, 00:05:34.077 "message": "Failed to claim CPU core: 2" 00:05:34.077 } 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61013 /var/tmp/spdk.sock 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61013 ']' 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61031 /var/tmp/spdk2.sock 00:05:34.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61031 ']' 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
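The request/response pair above is the raw JSON-RPC view of the conflict: the first target (pid 61013, default socket) accepted framework_enable_cpumask_locks and claimed its cores, so the same method against the second target's socket returns -32603 "Failed to claim CPU core: 2". A sketch of driving the same two calls by hand, assuming the repo's scripts/rpc.py exposes this method as a subcommand the way the rpc_cmd helper used here does:

    # First target, default socket /var/tmp/spdk.sock: succeeds and claims cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Second target on /var/tmp/spdk2.sock: fails because core 2 is already locked above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks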
00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.077 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.336 00:05:34.336 real 0m2.640s 00:05:34.336 user 0m1.094s 00:05:34.336 sys 0m0.195s 00:05:34.336 ************************************ 00:05:34.336 END TEST locking_overlapped_coremask_via_rpc 00:05:34.336 ************************************ 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.336 22:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.336 22:16:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.336 22:16:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61013 ]] 00:05:34.336 22:16:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61013 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61013 ']' 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61013 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.336 22:16:47 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61013 00:05:34.595 killing process with pid 61013 00:05:34.595 22:16:47 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.595 22:16:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.595 22:16:47 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61013' 00:05:34.595 22:16:47 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61013 00:05:34.595 22:16:47 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61013 00:05:35.184 22:16:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61031 ]] 00:05:35.184 22:16:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61031 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61031 ']' 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61031 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.184 22:16:48 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61031 00:05:35.184 killing process with pid 61031 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61031' 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61031 00:05:35.184 22:16:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61031 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61013 ]] 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61013 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61013 ']' 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61013 00:05:35.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61013) - No such process 00:05:35.751 Process with pid 61013 is not found 00:05:35.751 Process with pid 61031 is not found 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61013 is not found' 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61031 ]] 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61031 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61031 ']' 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61031 00:05:35.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61031) - No such process 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61031 is not found' 00:05:35.751 22:16:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.751 00:05:35.751 real 0m20.281s 00:05:35.751 user 0m35.450s 00:05:35.751 sys 0m5.673s 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.751 22:16:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.751 ************************************ 00:05:35.751 END TEST cpu_locks 00:05:35.751 ************************************ 00:05:35.751 22:16:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.751 00:05:35.751 real 0m47.644s 00:05:35.751 user 1m30.567s 00:05:35.751 sys 0m9.745s 00:05:35.751 22:16:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.751 22:16:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.751 ************************************ 00:05:35.752 END TEST event 00:05:35.752 ************************************ 00:05:36.010 22:16:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.010 22:16:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.010 22:16:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.010 22:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.010 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.010 ************************************ 00:05:36.010 START TEST thread 
00:05:36.010 ************************************ 00:05:36.010 22:16:49 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.010 * Looking for test storage... 00:05:36.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:36.011 22:16:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.011 22:16:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:36.011 22:16:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.011 22:16:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.011 ************************************ 00:05:36.011 START TEST thread_poller_perf 00:05:36.011 ************************************ 00:05:36.011 22:16:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.011 [2024-07-15 22:16:49.569220] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:36.011 [2024-07-15 22:16:49.569341] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61159 ] 00:05:36.269 [2024-07-15 22:16:49.701984] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.269 [2024-07-15 22:16:49.862606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.269 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:37.647 ====================================== 00:05:37.647 busy:2499167618 (cyc) 00:05:37.647 total_run_count: 382000 00:05:37.647 tsc_hz: 2490000000 (cyc) 00:05:37.647 ====================================== 00:05:37.647 poller_cost: 6542 (cyc), 2627 (nsec) 00:05:37.647 00:05:37.647 real 0m1.449s 00:05:37.647 user 0m1.259s 00:05:37.647 sys 0m0.082s 00:05:37.647 22:16:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.647 22:16:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.647 ************************************ 00:05:37.647 END TEST thread_poller_perf 00:05:37.647 ************************************ 00:05:37.647 22:16:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.647 22:16:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.647 22:16:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:37.647 22:16:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.647 22:16:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.647 ************************************ 00:05:37.647 START TEST thread_poller_perf 00:05:37.647 ************************************ 00:05:37.647 22:16:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.647 [2024-07-15 22:16:51.099265] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:37.647 [2024-07-15 22:16:51.099378] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61199 ] 00:05:37.647 [2024-07-15 22:16:51.244254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.905 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:37.905 [2024-07-15 22:16:51.404858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.278 ====================================== 00:05:39.278 busy:2492602754 (cyc) 00:05:39.278 total_run_count: 5229000 00:05:39.278 tsc_hz: 2490000000 (cyc) 00:05:39.278 ====================================== 00:05:39.278 poller_cost: 476 (cyc), 191 (nsec) 00:05:39.278 ************************************ 00:05:39.278 END TEST thread_poller_perf 00:05:39.278 ************************************ 00:05:39.278 00:05:39.278 real 0m1.453s 00:05:39.278 user 0m1.261s 00:05:39.278 sys 0m0.083s 00:05:39.278 22:16:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.278 22:16:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.278 22:16:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:39.278 22:16:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:39.278 ************************************ 00:05:39.278 END TEST thread 00:05:39.278 ************************************ 00:05:39.278 00:05:39.278 real 0m3.188s 00:05:39.278 user 0m2.626s 00:05:39.278 sys 0m0.343s 00:05:39.278 22:16:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.278 22:16:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.278 22:16:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.278 22:16:52 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.278 22:16:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.278 22:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.278 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.278 ************************************ 00:05:39.278 START TEST accel 00:05:39.278 ************************************ 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.278 * Looking for test storage... 00:05:39.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:39.278 22:16:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:39.278 22:16:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:39.278 22:16:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.278 22:16:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61269 00:05:39.278 22:16:52 accel -- accel/accel.sh@63 -- # waitforlisten 61269 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@829 -- # '[' -z 61269 ']' 00:05:39.278 22:16:52 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:39.278 22:16:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
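For reference, the poller_perf summaries in the thread tests above are internally consistent: poller_cost matches busy cycles divided by total_run_count, and the nanosecond figure follows from the reported tsc_hz. A quick sanity check in shell arithmetic using the numbers printed for the zero-period run (the 6542/2627 figures from the first run work out the same way):

    busy=2492602754; runs=5229000; tsc_hz=2490000000
    cyc=$(( busy / runs ))                   # 476 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 191 ns at 2.49 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"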
00:05:39.278 22:16:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.278 22:16:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.278 22:16:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.278 22:16:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.278 22:16:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.278 22:16:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.278 22:16:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:39.278 22:16:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:39.278 [2024-07-15 22:16:52.845932] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:39.278 [2024-07-15 22:16:52.846016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61269 ] 00:05:39.536 [2024-07-15 22:16:52.990435] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.536 [2024-07-15 22:16:53.139352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.795 [2024-07-15 22:16:53.212767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@862 -- # return 0 00:05:40.359 22:16:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:40.359 22:16:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:40.359 22:16:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:40.359 22:16:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:40.359 22:16:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:40.359 22:16:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 22:16:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.359 22:16:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.359 22:16:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.359 22:16:53 accel -- accel/accel.sh@75 -- # killprocess 61269 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@948 -- # '[' -z 61269 ']' 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@952 -- # kill -0 61269 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@953 -- # uname 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61269 00:05:40.359 killing process with pid 61269 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61269' 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@967 -- # kill 61269 00:05:40.359 22:16:53 accel -- common/autotest_common.sh@972 -- # wait 61269 00:05:40.926 22:16:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:40.926 22:16:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.926 22:16:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:40.926 22:16:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:40.926 22:16:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.926 22:16:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.926 22:16:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.926 22:16:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.926 ************************************ 00:05:40.926 START TEST accel_missing_filename 00:05:40.926 ************************************ 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.926 22:16:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:40.926 22:16:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:41.184 [2024-07-15 22:16:54.565273] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:41.184 [2024-07-15 22:16:54.565370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61326 ] 00:05:41.184 [2024-07-15 22:16:54.710054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.442 [2024-07-15 22:16:54.860703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.442 [2024-07-15 22:16:54.934230] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.442 [2024-07-15 22:16:55.040518] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:41.701 A filename is required. 
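accel_perf aborts during startup here exactly as the negative test expects: for the compress workload the input file is mandatory, so -w compress without -l stops with the "A filename is required." message above. A sketch of the corresponding valid invocation, mirroring what the accel_compress_verify test below passes but without the -y flag, which the next test shows compress rejects:

    # Compress the bundled test input for one second; -l names the uncompressed input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib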
00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.701 00:05:41.701 real 0m0.633s 00:05:41.701 user 0m0.413s 00:05:41.701 sys 0m0.156s 00:05:41.701 ************************************ 00:05:41.701 END TEST accel_missing_filename 00:05:41.701 ************************************ 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.701 22:16:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:41.701 22:16:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.701 22:16:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.701 22:16:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:41.701 22:16:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.701 22:16:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.701 ************************************ 00:05:41.701 START TEST accel_compress_verify 00:05:41.701 ************************************ 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.701 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.701 22:16:55 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:41.701 22:16:55 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:41.701 [2024-07-15 22:16:55.266117] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:41.701 [2024-07-15 22:16:55.266212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:05:41.960 [2024-07-15 22:16:55.410430] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.960 [2024-07-15 22:16:55.561193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.218 [2024-07-15 22:16:55.635175] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.218 [2024-07-15 22:16:55.741167] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:42.484 00:05:42.484 Compression does not support the verify option, aborting. 00:05:42.484 ************************************ 00:05:42.484 END TEST accel_compress_verify 00:05:42.484 ************************************ 00:05:42.484 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:42.484 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.484 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:42.484 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:42.485 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:42.485 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.485 00:05:42.485 real 0m0.636s 00:05:42.485 user 0m0.418s 00:05:42.485 sys 0m0.155s 00:05:42.485 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.485 22:16:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:42.485 22:16:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.485 22:16:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:42.485 22:16:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.485 22:16:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.485 22:16:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.485 ************************************ 00:05:42.485 START TEST accel_wrong_workload 00:05:42.485 ************************************ 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
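The common/autotest_common.sh lines traced here implement the NOT helper, which runs a command that is expected to fail and folds its exit status into a simple pass/fail. A simplified, hedged reconstruction of that pattern as the trace suggests it (the real helper's internals differ in detail and are not reproduced here):

  NOT() {
      local es=0
      "$@" || es=$?                          # run the wrapped command, capture its exit status
      (( es > 128 )) && es=$(( es - 128 ))   # fold signal-range statuses down, as in the es=234 -> es=106 lines earlier in the log
      (( es != 0 ))                          # NOT itself succeeds only if the wrapped command failed
  }
  NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar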
00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:42.485 22:16:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:42.485 Unsupported workload type: foobar 00:05:42.485 [2024-07-15 22:16:55.975266] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:42.485 accel_perf options: 00:05:42.485 [-h help message] 00:05:42.485 [-q queue depth per core] 00:05:42.485 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:42.485 [-T number of threads per core 00:05:42.485 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:42.485 [-t time in seconds] 00:05:42.485 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:42.485 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:42.485 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:42.485 [-l for compress/decompress workloads, name of uncompressed input file 00:05:42.485 [-S for crc32c workload, use this seed value (default 0) 00:05:42.485 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:42.485 [-f for fill workload, use this BYTE value (default 255) 00:05:42.485 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:42.485 [-y verify result if this switch is on] 00:05:42.485 [-a tasks to allocate per core (default: same value as -q)] 00:05:42.485 Can be used to spread operations across a wider range of memory. 
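The usage text above lists the accel_perf flags, and they map directly onto the positive tests later in this log. A hedged sketch of the equivalent direct invocations (the tests additionally pass -c /dev/fd/62 with a generated JSON config, which is omitted here):

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w crc32c -S 32 -y              # 1-second crc32c run, seed 32, verify results
  "$ACCEL_PERF" -t 1 -w copy -y                      # plain copy workload with verification
  "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill with byte 128 (0x80), queue depth 64, 64 tasks per core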
00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.485 00:05:42.485 real 0m0.047s 00:05:42.485 user 0m0.026s 00:05:42.485 sys 0m0.021s 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.485 22:16:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:42.485 ************************************ 00:05:42.485 END TEST accel_wrong_workload 00:05:42.485 ************************************ 00:05:42.485 22:16:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.485 22:16:56 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:42.485 22:16:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:42.485 22:16:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.485 22:16:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.485 ************************************ 00:05:42.485 START TEST accel_negative_buffers 00:05:42.485 ************************************ 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:42.485 22:16:56 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:42.485 -x option must be non-negative. 
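The accel_negative_buffers case drives accel_perf with -w xor -y -x -1, and the "-x option must be non-negative." line above is the rejection the test expects; the EAL banner and usage text for this run continue below. Per the usage text already printed, -x sets the number of xor source buffers with a documented minimum of 2, so a hedged contrast of the invalid and an assumed-valid invocation:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y -x -1   # rejected: negative source-buffer count (this test's expected failure)
  "$ACCEL_PERF" -t 1 -w xor -y -x 2    # assumed-valid form, using the documented minimum of 2 buffers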
00:05:42.485 [2024-07-15 22:16:56.086841] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:42.485 accel_perf options: 00:05:42.485 [-h help message] 00:05:42.485 [-q queue depth per core] 00:05:42.485 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:42.485 [-T number of threads per core 00:05:42.485 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:42.485 [-t time in seconds] 00:05:42.485 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:42.485 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:42.485 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:42.485 [-l for compress/decompress workloads, name of uncompressed input file 00:05:42.485 [-S for crc32c workload, use this seed value (default 0) 00:05:42.485 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:42.485 [-f for fill workload, use this BYTE value (default 255) 00:05:42.485 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:42.485 [-y verify result if this switch is on] 00:05:42.485 [-a tasks to allocate per core (default: same value as -q)] 00:05:42.485 Can be used to spread operations across a wider range of memory. 00:05:42.485 ************************************ 00:05:42.485 END TEST accel_negative_buffers 00:05:42.485 ************************************ 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.485 00:05:42.485 real 0m0.041s 00:05:42.485 user 0m0.020s 00:05:42.485 sys 0m0.020s 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.485 22:16:56 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:42.743 22:16:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.743 22:16:56 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:42.743 22:16:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.743 22:16:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.743 22:16:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.743 ************************************ 00:05:42.743 START TEST accel_crc32c 00:05:42.743 ************************************ 00:05:42.743 22:16:56 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:42.743 22:16:56 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:42.743 22:16:56 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:42.743 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:42.744 22:16:56 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:42.744 [2024-07-15 22:16:56.198530] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:42.744 [2024-07-15 22:16:56.198658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61415 ] 00:05:42.744 [2024-07-15 22:16:56.343361] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.001 [2024-07-15 22:16:56.501363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.001 22:16:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.377 22:16:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.377 00:05:44.377 real 0m1.643s 00:05:44.377 user 0m1.401s 00:05:44.377 sys 0m0.155s 00:05:44.377 22:16:57 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.377 22:16:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:44.377 ************************************ 00:05:44.377 END TEST accel_crc32c 00:05:44.377 ************************************ 00:05:44.377 22:16:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.377 22:16:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.377 22:16:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:44.377 22:16:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.377 22:16:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.377 ************************************ 00:05:44.377 START TEST accel_crc32c_C2 00:05:44.377 ************************************ 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:44.377 22:16:57 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.377 22:16:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:44.377 [2024-07-15 22:16:57.916939] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:44.377 [2024-07-15 22:16:57.917257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61449 ] 00:05:44.637 [2024-07-15 22:16:58.063220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.637 [2024-07-15 22:16:58.213845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:44.896 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 22:16:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.278 ************************************ 00:05:46.278 END TEST accel_crc32c_C2 00:05:46.278 ************************************ 00:05:46.278 00:05:46.278 real 0m1.652s 00:05:46.278 user 0m1.406s 00:05:46.278 sys 0m0.152s 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.278 22:16:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:46.278 22:16:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.278 22:16:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:46.278 22:16:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:46.278 22:16:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.278 22:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.278 ************************************ 00:05:46.278 START TEST accel_copy 00:05:46.278 ************************************ 00:05:46.278 22:16:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.278 22:16:59 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:46.278 22:16:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:46.278 [2024-07-15 22:16:59.640642] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:46.278 [2024-07-15 22:16:59.640738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61489 ] 00:05:46.278 [2024-07-15 22:16:59.784676] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.537 [2024-07-15 22:16:59.943590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 
22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.537 22:17:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:47.915 22:17:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.915 00:05:47.915 real 0m1.651s 00:05:47.915 user 0m1.396s 00:05:47.915 sys 0m0.163s 00:05:47.915 ************************************ 00:05:47.915 END TEST accel_copy 00:05:47.915 ************************************ 00:05:47.915 22:17:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.915 22:17:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:47.915 22:17:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.915 22:17:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.915 22:17:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:47.915 22:17:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.915 22:17:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.915 ************************************ 00:05:47.915 START TEST accel_fill 00:05:47.915 ************************************ 00:05:47.915 22:17:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.915 22:17:01 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:47.915 22:17:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:47.915 [2024-07-15 22:17:01.363774] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:47.915 [2024-07-15 22:17:01.363873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ] 00:05:47.915 [2024-07-15 22:17:01.510202] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.174 [2024-07-15 22:17:01.662065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.174 22:17:01 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.174 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.175 22:17:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:49.606 22:17:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.606 00:05:49.606 real 0m1.646s 00:05:49.606 user 0m1.403s 00:05:49.606 sys 0m0.154s 00:05:49.606 22:17:02 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.606 22:17:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:49.606 ************************************ 00:05:49.606 END TEST accel_fill 00:05:49.606 ************************************ 00:05:49.606 22:17:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.606 22:17:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:49.606 22:17:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:49.606 22:17:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.606 22:17:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.606 ************************************ 00:05:49.606 START TEST accel_copy_crc32c 00:05:49.606 ************************************ 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:49.606 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:49.606 [2024-07-15 22:17:03.081379] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:49.606 [2024-07-15 22:17:03.081762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61559 ] 00:05:49.606 [2024-07-15 22:17:03.223922] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.866 [2024-07-15 22:17:03.381447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.866 22:17:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
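The parameter dump above configures copy_crc32c on the software module: a 4096-byte source and destination, queue depth 32, a 1-second run. As a rough illustration only (not SPDK's accel implementation; the helper names and driver below are invented for this sketch), the operation amounts to copying the buffer while computing a CRC-32C (Castagnoli) over the data:

    /* Illustrative sketch only: copy 4096 bytes and return a CRC-32C over the data.
     * crc32c_sw() is the textbook bitwise (reflected) form with polynomial
     * 0x82F63B78; real engines use SSE4.2 crc32 instructions or lookup tables.
     * Function names and the main() driver are invented for this example. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    static uint32_t crc32c_sw(uint32_t seed, const uint8_t *buf, size_t len)
    {
        uint32_t crc = ~seed;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1U)));
        }
        return ~crc;
    }

    static uint32_t copy_crc32c_sw(void *dst, const void *src, size_t len, uint32_t seed)
    {
        memcpy(dst, src, len);                     /* the "copy" half   */
        return crc32c_sw(seed, src, len);          /* the "crc32c" half */
    }

    int main(void)
    {
        static uint8_t src[4096], dst[4096];       /* 4096 bytes, as in the test above */
        memset(src, 0xab, sizeof(src));
        printf("crc32c = 0x%08x\n", copy_crc32c_sw(dst, src, sizeof(src), 0));
        return 0;
    }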
00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.242 00:05:51.242 real 0m1.650s 00:05:51.242 user 0m1.414s 00:05:51.242 sys 0m0.147s 00:05:51.242 ************************************ 00:05:51.242 END TEST accel_copy_crc32c 00:05:51.242 ************************************ 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.242 22:17:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:51.242 22:17:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.242 22:17:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:51.242 22:17:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.242 22:17:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.242 22:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.242 ************************************ 00:05:51.242 START TEST accel_copy_crc32c_C2 00:05:51.242 ************************************ 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.242 22:17:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:51.242 [2024-07-15 22:17:04.804155] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:51.242 [2024-07-15 22:17:04.804243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61594 ] 00:05:51.501 [2024-07-15 22:17:04.950112] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.501 [2024-07-15 22:17:05.102892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.760 22:17:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 ************************************ 00:05:53.136 END TEST accel_copy_crc32c_C2 00:05:53.136 ************************************ 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.136 00:05:53.136 real 0m1.655s 00:05:53.136 
user 0m1.410s 00:05:53.136 sys 0m0.157s 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.136 22:17:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 22:17:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.136 22:17:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:53.136 22:17:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.136 22:17:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.136 22:17:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 ************************************ 00:05:53.136 START TEST accel_dualcast 00:05:53.136 ************************************ 00:05:53.136 22:17:06 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:53.136 22:17:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:53.136 [2024-07-15 22:17:06.532461] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:53.136 [2024-07-15 22:17:06.532790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61637 ] 00:05:53.136 [2024-07-15 22:17:06.680447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.395 [2024-07-15 22:17:06.828569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.395 22:17:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.772 22:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.773 22:17:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.773 22:17:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:54.773 22:17:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.773 00:05:54.773 real 0m1.650s 00:05:54.773 user 0m1.401s 00:05:54.773 sys 0m0.157s 00:05:54.773 22:17:08 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.773 ************************************ 00:05:54.773 END TEST accel_dualcast 00:05:54.773 ************************************ 00:05:54.773 22:17:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:54.773 22:17:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.773 22:17:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:54.773 22:17:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:54.773 22:17:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.773 22:17:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.773 ************************************ 00:05:54.773 START TEST accel_compare 00:05:54.773 ************************************ 00:05:54.773 22:17:08 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:54.773 22:17:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:54.773 [2024-07-15 22:17:08.253305] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
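The dualcast pass that finishes above used the same shape: 4096-byte buffers on the software module at queue depth 32 for 1 second. Dualcast writes a single source to two destinations in one operation; a minimal sketch of that idea (illustrative only, not the SPDK implementation, with a made-up function name):

    /* Illustrative sketch only: "dualcast" = one source copied to two
     * destinations in a single operation. dualcast_sw() is an invented name;
     * buffer sizes mirror the 4096-byte test above. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        static uint8_t src[4096], d1[4096], d2[4096];
        memset(src, 0x42, sizeof(src));
        dualcast_sw(d1, d2, src, sizeof(src));
        printf("d1 ok: %d, d2 ok: %d\n",
               memcmp(d1, src, sizeof(src)) == 0, memcmp(d2, src, sizeof(src)) == 0);
        return 0;
    }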
00:05:54.773 [2024-07-15 22:17:08.253398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:05:54.773 [2024-07-15 22:17:08.397547] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.032 [2024-07-15 22:17:08.498551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.032 22:17:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 
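compare is set up above with the same 4096-byte buffers on the software module. The operation simply checks that two buffers hold identical data, so a software sketch (not SPDK's code; the function name is invented) reduces to a memcmp:

    /* Illustrative sketch only: accel "compare" succeeds when the two buffers
     * match. compare_sw() is an invented name standing in for the software path. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static int compare_sw(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) == 0 ? 0 : -1;    /* 0 = match, -1 = miscompare */
    }

    int main(void)
    {
        static uint8_t x[4096], y[4096];
        memset(x, 0x11, sizeof(x));
        memset(y, 0x11, sizeof(y));
        printf("compare result: %d\n", compare_sw(x, y, sizeof(x)));
        return 0;
    }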
00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.408 ************************************ 00:05:56.408 END TEST accel_compare 00:05:56.408 ************************************ 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:56.408 22:17:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.408 00:05:56.408 real 0m1.464s 00:05:56.408 user 0m1.273s 00:05:56.408 sys 0m0.100s 00:05:56.408 22:17:09 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.408 22:17:09 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:56.408 22:17:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.408 22:17:09 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:56.408 22:17:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:56.408 22:17:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.408 22:17:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.408 ************************************ 00:05:56.408 START TEST accel_xor 00:05:56.408 ************************************ 00:05:56.408 22:17:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:56.408 22:17:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:56.409 22:17:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:56.409 [2024-07-15 22:17:09.783566] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:56.409 [2024-07-15 22:17:09.783684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ] 00:05:56.409 [2024-07-15 22:17:09.926396] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.409 [2024-07-15 22:17:10.025781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.667 22:17:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.041 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.041 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.042 22:17:11 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 ************************************ 00:05:58.042 END TEST accel_xor 00:05:58.042 ************************************ 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.042 00:05:58.042 real 0m1.554s 00:05:58.042 user 0m0.018s 00:05:58.042 sys 0m0.003s 00:05:58.042 22:17:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.042 22:17:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:58.042 22:17:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.042 22:17:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:58.042 22:17:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:58.042 22:17:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.042 22:17:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.042 ************************************ 00:05:58.042 START TEST accel_xor 00:05:58.042 ************************************ 00:05:58.042 22:17:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:58.042 22:17:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:58.042 [2024-07-15 22:17:11.409517] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
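The xor test that completes above ran with two 4096-byte source buffers, and the follow-up run just starting (accel_test -t 1 -w xor -y -x 3) uses three. Either way the operation folds N equal-length sources into one destination by bytewise XOR, which is the core of RAID-style parity generation; a hedged sketch with invented names, not the SPDK code path:

    /* Illustrative sketch only: XOR n equal-length source buffers into dst,
     * the way parity is built for RAID-style layouts. xor_sw() is an invented
     * name; the test above used 2 sources, the next one uses 3. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    static void xor_sw(uint8_t *dst, uint8_t **srcs, size_t nsrc, size_t len)
    {
        memcpy(dst, srcs[0], len);
        for (size_t i = 1; i < nsrc; i++)
            for (size_t j = 0; j < len; j++)
                dst[j] ^= srcs[i][j];
    }

    int main(void)
    {
        static uint8_t a[4096], b[4096], dst[4096];
        memset(a, 0xf0, sizeof(a));
        memset(b, 0x0f, sizeof(b));
        uint8_t *srcs[] = { a, b };
        xor_sw(dst, srcs, 2, sizeof(dst));
        printf("dst[0] = 0x%02x\n", dst[0]);       /* expect 0xff */
        return 0;
    }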
00:05:58.042 [2024-07-15 22:17:11.409884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61740 ] 00:05:58.042 [2024-07-15 22:17:11.552931] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.042 [2024-07-15 22:17:11.661261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 22:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.235 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.236 22:17:12 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:59.236 22:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.236 00:05:59.236 real 0m1.473s 00:05:59.236 user 0m1.268s 00:05:59.236 sys 0m0.112s 00:05:59.236 22:17:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.236 ************************************ 00:05:59.236 END TEST accel_xor 00:05:59.236 ************************************ 00:05:59.236 22:17:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:59.494 22:17:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.494 22:17:12 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:59.494 22:17:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:59.494 22:17:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.494 22:17:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.494 ************************************ 00:05:59.494 START TEST accel_dif_verify 00:05:59.494 ************************************ 00:05:59.494 22:17:12 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:59.494 22:17:12 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:59.494 [2024-07-15 22:17:12.950434] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:59.494 [2024-07-15 22:17:12.950530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61775 ] 00:05:59.494 [2024-07-15 22:17:13.092525] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.754 [2024-07-15 22:17:13.190993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.754 22:17:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:01.147 22:17:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.147 00:06:01.147 real 0m1.461s 00:06:01.147 user 0m1.266s 00:06:01.147 sys 0m0.110s 00:06:01.147 22:17:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.147 ************************************ 00:06:01.147 END TEST accel_dif_verify 00:06:01.147 ************************************ 00:06:01.147 22:17:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:01.147 22:17:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.147 22:17:14 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:01.147 22:17:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:01.147 22:17:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.147 22:17:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.147 ************************************ 00:06:01.147 START TEST accel_dif_generate 00:06:01.147 ************************************ 00:06:01.147 22:17:14 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.147 22:17:14 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:01.147 22:17:14 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:01.147 [2024-07-15 22:17:14.482788] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:01.147 [2024-07-15 22:17:14.482880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61809 ] 00:06:01.147 [2024-07-15 22:17:14.627159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.406 [2024-07-15 22:17:14.782275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.406 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.407 22:17:14 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.407 22:17:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.785 22:17:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:02.785 ************************************ 00:06:02.785 END TEST accel_dif_generate 00:06:02.785 ************************************ 00:06:02.785 
22:17:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.785 00:06:02.785 real 0m1.650s 00:06:02.785 user 0m1.393s 00:06:02.785 sys 0m0.168s 00:06:02.785 22:17:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.785 22:17:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 22:17:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.785 22:17:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:02.785 22:17:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:02.785 22:17:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.785 22:17:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 ************************************ 00:06:02.785 START TEST accel_dif_generate_copy 00:06:02.785 ************************************ 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:02.785 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:02.785 [2024-07-15 22:17:16.197364] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:02.785 [2024-07-15 22:17:16.197463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61845 ] 00:06:02.785 [2024-07-15 22:17:16.343140] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.043 [2024-07-15 22:17:16.494220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.043 22:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.414 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 ************************************ 00:06:04.415 END TEST accel_dif_generate_copy 00:06:04.415 ************************************ 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.415 00:06:04.415 real 0m1.650s 00:06:04.415 user 0m1.392s 00:06:04.415 sys 0m0.168s 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.415 22:17:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:04.415 22:17:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.415 22:17:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:04.415 22:17:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.415 22:17:17 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.415 22:17:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.415 22:17:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.415 ************************************ 00:06:04.415 START TEST accel_comp 00:06:04.415 ************************************ 00:06:04.415 22:17:17 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:04.415 22:17:17 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:04.415 22:17:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:04.415 [2024-07-15 22:17:17.922419] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:04.415 [2024-07-15 22:17:17.922516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:06:04.672 [2024-07-15 22:17:18.065395] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.672 [2024-07-15 22:17:18.228101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:04.978 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.979 22:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.913 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.171 ************************************ 00:06:06.171 END TEST accel_comp 00:06:06.171 ************************************ 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:06.171 22:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.171 00:06:06.171 real 0m1.661s 00:06:06.171 user 0m1.402s 00:06:06.171 sys 0m0.160s 00:06:06.171 22:17:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.171 22:17:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:06.171 22:17:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.171 22:17:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.171 22:17:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:06.171 22:17:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.171 22:17:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.171 ************************************ 00:06:06.171 START TEST accel_decomp 00:06:06.171 ************************************ 00:06:06.171 22:17:19 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:06.171 22:17:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:06.171 [2024-07-15 22:17:19.664975] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:06.171 [2024-07-15 22:17:19.665819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61916 ] 00:06:06.429 [2024-07-15 22:17:19.827269] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.429 [2024-07-15 22:17:19.980203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.429 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.687 22:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.063 ************************************ 00:06:08.063 END TEST accel_decomp 00:06:08.063 ************************************ 00:06:08.063 22:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.063 00:06:08.063 real 0m1.682s 00:06:08.063 user 0m1.412s 00:06:08.063 sys 0m0.170s 00:06:08.063 22:17:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.063 22:17:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:08.063 22:17:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.063 22:17:21 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:08.063 22:17:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:08.063 22:17:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.063 22:17:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.063 ************************************ 00:06:08.063 START TEST accel_decomp_full 00:06:08.063 ************************************ 00:06:08.064 22:17:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:08.064 22:17:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:08.064 [2024-07-15 22:17:21.419973] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:08.064 [2024-07-15 22:17:21.420063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61956 ] 00:06:08.064 [2024-07-15 22:17:21.566422] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.323 [2024-07-15 22:17:21.718716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.323 22:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.703 22:17:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.703 00:06:09.703 real 0m1.668s 00:06:09.703 user 0m1.426s 00:06:09.703 sys 0m0.153s 00:06:09.703 22:17:23 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.703 22:17:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:09.703 ************************************ 00:06:09.703 END TEST accel_decomp_full 00:06:09.703 ************************************ 00:06:09.703 22:17:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.703 22:17:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.703 22:17:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:09.703 22:17:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.703 22:17:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.703 ************************************ 00:06:09.703 START TEST accel_decomp_mcore 00:06:09.703 ************************************ 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:09.703 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:09.703 [2024-07-15 22:17:23.157964] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:09.703 [2024-07-15 22:17:23.158297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61996 ] 00:06:09.703 [2024-07-15 22:17:23.299331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.962 [2024-07-15 22:17:23.464565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.962 [2024-07-15 22:17:23.464763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.962 [2024-07-15 22:17:23.464956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.962 [2024-07-15 22:17:23.464959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.962 22:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.339 22:17:24 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.339 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.340 ************************************ 00:06:11.340 END TEST accel_decomp_mcore 00:06:11.340 ************************************ 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.340 00:06:11.340 real 0m1.680s 00:06:11.340 user 0m4.986s 00:06:11.340 sys 0m0.177s 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.340 22:17:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:11.340 22:17:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.340 22:17:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.340 22:17:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:11.340 22:17:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.340 22:17:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.340 ************************************ 00:06:11.340 START TEST accel_decomp_full_mcore 00:06:11.340 ************************************ 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.340 22:17:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:11.340 22:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:11.340 [2024-07-15 22:17:24.911360] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:11.340 [2024-07-15 22:17:24.911454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62034 ] 00:06:11.599 [2024-07-15 22:17:25.050827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.599 [2024-07-15 22:17:25.208143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.599 [2024-07-15 22:17:25.208347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.599 [2024-07-15 22:17:25.208538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.599 [2024-07-15 22:17:25.208613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.858 22:17:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.858 22:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.236 ************************************ 00:06:13.236 END TEST accel_decomp_full_mcore 00:06:13.236 ************************************ 00:06:13.236 00:06:13.236 real 0m1.673s 00:06:13.236 user 0m4.979s 00:06:13.236 sys 0m0.179s 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.236 22:17:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:13.236 22:17:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.236 22:17:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:13.236 22:17:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:13.236 22:17:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.236 22:17:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.236 ************************************ 00:06:13.236 START TEST accel_decomp_mthread 00:06:13.237 ************************************ 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:13.237 22:17:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:13.237 [2024-07-15 22:17:26.655886] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:13.237 [2024-07-15 22:17:26.656202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62071 ] 00:06:13.237 [2024-07-15 22:17:26.802399] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.495 [2024-07-15 22:17:26.953626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.495 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.495 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.495 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.495 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.495 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.496 22:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.874 ************************************ 00:06:14.874 END TEST accel_decomp_mthread 00:06:14.874 ************************************ 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.874 00:06:14.874 real 0m1.662s 00:06:14.874 user 0m1.416s 00:06:14.874 sys 0m0.161s 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.874 22:17:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:14.874 22:17:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.875 22:17:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.875 22:17:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:14.875 22:17:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.875 22:17:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.875 ************************************ 00:06:14.875 START 
TEST accel_decomp_full_mthread 00:06:14.875 ************************************ 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:14.875 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:14.875 [2024-07-15 22:17:28.394155] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:14.875 [2024-07-15 22:17:28.394489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:06:15.134 [2024-07-15 22:17:28.541252] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.134 [2024-07-15 22:17:28.694979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.393 22:17:28 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 22:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.772 00:06:16.772 real 0m1.684s 00:06:16.772 user 0m1.436s 00:06:16.772 sys 0m0.160s 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.772 ************************************ 00:06:16.772 END TEST accel_decomp_full_mthread 00:06:16.772 ************************************ 00:06:16.772 22:17:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
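The accel_decomp_full_mthread case above reduces to the single accel_perf invocation recorded at the top of the run; the real/user/sys figures just printed are the timing of exactly that command. A flag-by-flag reading of it as a small sketch (the comments reflect how accel.sh's decompress tests use the options rather than the full accel_perf usage text, the paths are this CI VM's layout, and standalone you would either open fd 62 yourself or point -c at a JSON file):

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  args=(
      -c /dev/fd/62    # accel JSON config fed on fd 62 (empty in this run, so the software module handles the opcode)
      -t 1             # run the workload for one second
      -w decompress    # opcode under test
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # compressed input file shipped with the tests
      -y               # verify the decompressed output
      -o 0             # transfer size; the "full" variants pass 0 so the whole file is covered
      -T 2             # thread count; this is what makes it the "mthread" variant
  )
  "$accel_perf" "${args[@]}"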
00:06:16.773 22:17:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.773 22:17:30 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:16.773 22:17:30 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:16.773 22:17:30 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:16.773 22:17:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.773 22:17:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.773 22:17:30 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:16.773 22:17:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.773 22:17:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.773 22:17:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.773 22:17:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.773 22:17:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.773 22:17:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:16.773 22:17:30 accel -- accel/accel.sh@41 -- # jq -r . 00:06:16.773 ************************************ 00:06:16.773 START TEST accel_dif_functional_tests 00:06:16.773 ************************************ 00:06:16.773 22:17:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:16.773 [2024-07-15 22:17:30.170815] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:16.773 [2024-07-15 22:17:30.170892] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62147 ] 00:06:16.773 [2024-07-15 22:17:30.313543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.032 [2024-07-15 22:17:30.472277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.032 [2024-07-15 22:17:30.472481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.032 [2024-07-15 22:17:30.472475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.032 [2024-07-15 22:17:30.548439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.032 00:06:17.032 00:06:17.032 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.032 http://cunit.sourceforge.net/ 00:06:17.032 00:06:17.032 00:06:17.032 Suite: accel_dif 00:06:17.032 Test: verify: DIF generated, GUARD check ...passed 00:06:17.032 Test: verify: DIF generated, APPTAG check ...passed 00:06:17.032 Test: verify: DIF generated, REFTAG check ...passed 00:06:17.032 Test: verify: DIF not generated, GUARD check ...passed 00:06:17.032 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 22:17:30.598078] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:17.032 [2024-07-15 22:17:30.598267] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:17.032 passed 00:06:17.032 Test: verify: DIF not generated, REFTAG check ...passed 00:06:17.032 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:17.032 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 22:17:30.598356] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:17.032 [2024-07-15 22:17:30.598487] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:17.032 passed 00:06:17.032 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:17.032 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:17.032 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:17.032 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:17.032 Test: verify copy: DIF generated, GUARD check ...passed 00:06:17.032 Test: verify copy: DIF generated, APPTAG check ...[2024-07-15 22:17:30.598709] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:17.032 passed 00:06:17.032 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:17.032 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:17.032 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 22:17:30.598964] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:17.032 [2024-07-15 22:17:30.599041] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:17.032 passed 00:06:17.032 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:17.032 Test: generate copy: DIF generated, GUARD check ...passed 00:06:17.032 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:17.032 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:17.032 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-15 22:17:30.599114] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:17.032 passed 00:06:17.032 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:17.032 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:17.032 Test: generate copy: iovecs-len validate ...passed 00:06:17.032 Test: generate copy: buffer alignment validate ...passed 00:06:17.032 00:06:17.032 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.032 suites 1 1 n/a 0 0 00:06:17.032 tests 26 26 26 0 0 00:06:17.032 asserts 115 115 115 0 n/a 00:06:17.032 00:06:17.032 Elapsed time = 0.003 seconds 00:06:17.032 [2024-07-15 22:17:30.599405] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
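The *ERROR* lines from dif.c interleaved with the passed results above are expected: the negative cases deliberately corrupt the Guard (a CRC over the data block), the Application Tag and the Reference Tag in the protection information, and each case passes precisely because the verify path reports the mismatch it was meant to catch. The START/END banners and the real/user/sys triples that frame this and every other suite in the log come from the run_test helper in test/common/autotest_common.sh; a simplified sketch of that pattern (the real helper also manages xtrace and error handling):

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                 # source of the real/user/sys lines printed after each suite
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }
  run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62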
00:06:17.291 00:06:17.291 real 0m0.780s 00:06:17.291 user 0m1.070s 00:06:17.291 sys 0m0.218s 00:06:17.291 22:17:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.291 ************************************ 00:06:17.291 END TEST accel_dif_functional_tests 00:06:17.291 ************************************ 00:06:17.291 22:17:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 22:17:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.549 00:06:17.549 real 0m38.305s 00:06:17.549 user 0m39.174s 00:06:17.549 sys 0m5.329s 00:06:17.549 22:17:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.549 22:17:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 ************************************ 00:06:17.549 END TEST accel 00:06:17.549 ************************************ 00:06:17.549 22:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.549 22:17:31 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:17.549 22:17:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.549 22:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.549 22:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 ************************************ 00:06:17.549 START TEST accel_rpc 00:06:17.549 ************************************ 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:17.549 * Looking for test storage... 00:06:17.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:17.549 22:17:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.549 22:17:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62217 00:06:17.549 22:17:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:17.549 22:17:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62217 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62217 ']' 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.549 22:17:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.807 [2024-07-15 22:17:31.211010] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:17.807 [2024-07-15 22:17:31.211084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:06:17.807 [2024-07-15 22:17:31.339392] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.066 [2024-07-15 22:17:31.490076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.633 22:17:32 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.633 22:17:32 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:18.633 22:17:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:18.633 22:17:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:18.633 22:17:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:18.633 22:17:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:18.633 22:17:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:18.633 22:17:32 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.633 22:17:32 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.633 22:17:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.633 ************************************ 00:06:18.633 START TEST accel_assign_opcode 00:06:18.633 ************************************ 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:18.634 [2024-07-15 22:17:32.085748] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:18.634 [2024-07-15 22:17:32.097721] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.634 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:18.634 [2024-07-15 22:17:32.194400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.891 
22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.891 software 00:06:18.891 00:06:18.891 real 0m0.388s 00:06:18.891 user 0m0.055s 00:06:18.891 sys 0m0.011s 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.891 22:17:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:18.891 ************************************ 00:06:18.891 END TEST accel_assign_opcode 00:06:18.891 ************************************ 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:19.150 22:17:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62217 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62217 ']' 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62217 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62217 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.150 killing process with pid 62217 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62217' 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@967 -- # kill 62217 00:06:19.150 22:17:32 accel_rpc -- common/autotest_common.sh@972 -- # wait 62217 00:06:19.744 00:06:19.744 real 0m2.086s 00:06:19.744 user 0m1.943s 00:06:19.744 sys 0m0.600s 00:06:19.744 22:17:33 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.744 22:17:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.744 ************************************ 00:06:19.744 END TEST accel_rpc 00:06:19.744 ************************************ 00:06:19.744 22:17:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.744 22:17:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.744 22:17:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.744 22:17:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.744 22:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.744 ************************************ 00:06:19.744 START TEST app_cmdline 00:06:19.744 ************************************ 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.744 * Looking for test storage... 
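Before the app_cmdline run below gets going, the accel_rpc suite that just ended is worth restating in plain RPC terms: it starts spdk_tgt with --wait-for-rpc, assigns the copy opcode first to a non-existent module and then to the software module, initializes the framework, and checks that the assignment stuck. The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same sequence replayed by hand looks roughly like this (run from the SPDK repo root against a target started with --wait-for-rpc):

  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init, overridden by the next call
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints "software", which is what the test greps for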
00:06:19.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.744 22:17:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:19.744 22:17:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62310 00:06:19.744 22:17:33 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:19.744 22:17:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62310 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62310 ']' 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.744 22:17:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.017 [2024-07-15 22:17:33.376246] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:20.017 [2024-07-15 22:17:33.376324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:06:20.017 [2024-07-15 22:17:33.511859] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.276 [2024-07-15 22:17:33.663822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.276 [2024-07-15 22:17:33.737368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.841 22:17:34 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.841 22:17:34 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.841 { 00:06:20.841 "version": "SPDK v24.09-pre git sha1 fcbf7f00f", 00:06:20.841 "fields": { 00:06:20.841 "major": 24, 00:06:20.841 "minor": 9, 00:06:20.841 "patch": 0, 00:06:20.841 "suffix": "-pre", 00:06:20.841 "commit": "fcbf7f00f" 00:06:20.841 } 00:06:20.841 } 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.841 22:17:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.841 22:17:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.841 22:17:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.841 22:17:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.098 22:17:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:21.098 22:17:34 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:21.098 22:17:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.098 request: 00:06:21.098 { 00:06:21.098 "method": "env_dpdk_get_mem_stats", 00:06:21.098 "req_id": 1 00:06:21.098 } 00:06:21.098 Got JSON-RPC error response 00:06:21.098 response: 00:06:21.098 { 00:06:21.098 "code": -32601, 00:06:21.098 "message": "Method not found" 00:06:21.098 } 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.098 22:17:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62310 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62310 ']' 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62310 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62310 00:06:21.098 killing process with pid 62310 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62310' 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 62310 00:06:21.098 22:17:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 62310 00:06:21.663 00:06:21.663 real 0m2.083s 00:06:21.663 user 0m2.258s 00:06:21.663 sys 0m0.601s 00:06:21.663 22:17:35 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.663 ************************************ 00:06:21.663 END TEST app_cmdline 00:06:21.663 ************************************ 00:06:21.663 22:17:35 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.920 22:17:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.920 22:17:35 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.920 22:17:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.920 22:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.920 22:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 ************************************ 00:06:21.920 START TEST version 00:06:21.920 ************************************ 00:06:21.920 22:17:35 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.920 * Looking for test storage... 00:06:21.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.920 22:17:35 version -- app/version.sh@17 -- # get_header_version major 00:06:21.920 22:17:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # cut -f2 00:06:21.920 22:17:35 version -- app/version.sh@17 -- # major=24 00:06:21.920 22:17:35 version -- app/version.sh@18 -- # get_header_version minor 00:06:21.920 22:17:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # cut -f2 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.920 22:17:35 version -- app/version.sh@18 -- # minor=9 00:06:21.920 22:17:35 version -- app/version.sh@19 -- # get_header_version patch 00:06:21.920 22:17:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # cut -f2 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.920 22:17:35 version -- app/version.sh@19 -- # patch=0 00:06:21.920 22:17:35 version -- app/version.sh@20 -- # get_header_version suffix 00:06:21.920 22:17:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # cut -f2 00:06:21.920 22:17:35 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.920 22:17:35 version -- app/version.sh@20 -- # suffix=-pre 00:06:21.920 22:17:35 version -- app/version.sh@22 -- # version=24.9 00:06:21.920 22:17:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:21.920 22:17:35 version -- app/version.sh@28 -- # version=24.9rc0 00:06:21.920 22:17:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:21.920 22:17:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:22.178 22:17:35 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:22.178 22:17:35 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:22.178 00:06:22.178 real 0m0.219s 00:06:22.178 user 0m0.110s 00:06:22.178 sys 0m0.160s 00:06:22.178 ************************************ 00:06:22.178 END TEST version 00:06:22.178 ************************************ 00:06:22.178 22:17:35 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.178 22:17:35 version -- common/autotest_common.sh@10 -- # set +x 00:06:22.178 22:17:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.178 22:17:35 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:22.178 22:17:35 -- spdk/autotest.sh@198 -- # uname -s 00:06:22.178 22:17:35 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:22.178 22:17:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:22.178 22:17:35 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:22.178 22:17:35 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:22.178 22:17:35 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:22.178 22:17:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.178 22:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.178 22:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.178 ************************************ 00:06:22.178 START TEST spdk_dd 00:06:22.178 ************************************ 00:06:22.178 22:17:35 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:22.178 * Looking for test storage... 00:06:22.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:22.178 22:17:35 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.178 22:17:35 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.178 22:17:35 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.178 22:17:35 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.178 22:17:35 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.178 22:17:35 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.178 22:17:35 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.178 22:17:35 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:22.178 22:17:35 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.178 22:17:35 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:22.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.780 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:22.780 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:22.780 22:17:36 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:22.780 22:17:36 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:22.780 22:17:36 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:23.040 22:17:36 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:23.040 22:17:36 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:23.040 22:17:36 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:23.040 22:17:36 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.040 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:23.041 
22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:23.041 * spdk_dd linked to liburing 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:23.041 22:17:36 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:23.041 22:17:36 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:23.042 22:17:36 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:23.042 22:17:36 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:23.042 22:17:36 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:23.042 22:17:36 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:23.042 22:17:36 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:23.042 22:17:36 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:23.042 22:17:36 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:23.042 22:17:36 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:23.042 22:17:36 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.042 22:17:36 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.042 22:17:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 ************************************ 00:06:23.042 START TEST spdk_dd_basic_rw 00:06:23.042 ************************************ 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:23.042 * Looking for test storage... 
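The @142-@157 portion of the trace above is dd/common.sh deciding whether the uring-specific dd paths can run: it walks the shared objects spdk_dd is linked against, announces the match on liburing.so.2, and only then confirms CONFIG_URING=y from build_config.sh and the presence of /usr/lib64/liburing.so.2 before exporting liburing_in_use=1. A minimal bash re-creation of that decision is sketched below; it is illustrative only, not the verbatim dd/common.sh source, and the check_liburing wrapper name is hypothetical.

# Hedged sketch of the liburing check traced above -- illustrative only,
# not the exact dd/common.sh implementation.
check_liburing() {
  local bin=$1 lib so _ linked=0
  # ldd lines look like: "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
  while read -r lib _ so _; do
    [[ $lib == liburing.so.* ]] && linked=1
  done < <(ldd "$bin")
  ((linked)) || return 0
  printf '* %s linked to liburing\n' "${bin##*/}"
  # Only advertise uring support when the build asked for it and the system
  # library is really present (the @149/@152 tests in the trace).
  [[ $CONFIG_URING != y ]] && return 1
  [[ ! -e /usr/lib64/liburing.so.2 ]] && return 1
  export liburing_in_use=1
}

With liburing_in_use=1 exported, the dd/dd.sh@15 guard seen above evaluates false and the run proceeds into run_test spdk_dd_basic_rw.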
00:06:23.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:23.042 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 ************************************ 00:06:23.304 START TEST dd_bs_lt_native_bs 00:06:23.304 ************************************ 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.304 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.305 22:17:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:23.563 { 00:06:23.563 "subsystems": [ 00:06:23.563 { 00:06:23.564 "subsystem": "bdev", 00:06:23.564 "config": [ 00:06:23.564 { 00:06:23.564 "params": { 00:06:23.564 "trtype": "pcie", 00:06:23.564 "traddr": "0000:00:10.0", 00:06:23.564 "name": "Nvme0" 00:06:23.564 }, 00:06:23.564 "method": "bdev_nvme_attach_controller" 00:06:23.564 }, 00:06:23.564 { 00:06:23.564 "method": "bdev_wait_for_examine" 00:06:23.564 } 00:06:23.564 ] 00:06:23.564 } 00:06:23.564 ] 00:06:23.564 } 00:06:23.564 [2024-07-15 22:17:36.951308] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
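Just before this point, dd/common.sh derived the namespace's native block size by matching the spdk_nvme_identify output twice: first for the currently selected LBA format (#04), then for that format's data size (4096). The sketch below condenses that probe into a single function; it is an illustration of the two regexes visible in the trace, not the verbatim get_native_nvme_bs implementation.

# Hedged sketch of the native-block-size probe -- an illustration of the two
# regexes in the trace, not the verbatim dd/common.sh get_native_nvme_bs.
get_native_nvme_bs() {
  local pci=$1 id lbaf
  # Identify the controller behind the given PCIe address (as @126 does).
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
  # Which LBA format is currently selected? ("Current LBA Format: LBA Format #04")
  [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] || return 1
  lbaf=${BASH_REMATCH[1]}
  # The data size of that format is the native block size ("Data Size: 4096").
  [[ $id =~ LBA\ Format\ #${lbaf}:\ Data\ Size:\ *([0-9]+) ]] || return 1
  echo "${BASH_REMATCH[1]}"
}
# e.g. native_bs=$(get_native_nvme_bs 0000:00:10.0)   # -> 4096 here

basic_rw.sh uses the 4096-byte result both to size the copies that follow and for the dd_bs_lt_native_bs case being set up here, which expects spdk_dd to reject --bs=2048 as smaller than the native block size; the *ERROR* line that follows is the expected failure, inverted by the NOT wrapper.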
00:06:23.564 [2024-07-15 22:17:36.951389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ] 00:06:23.564 [2024-07-15 22:17:37.087178] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.822 [2024-07-15 22:17:37.251771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.822 [2024-07-15 22:17:37.331260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.822 [2024-07-15 22:17:37.454401] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:23.822 [2024-07-15 22:17:37.454477] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.081 [2024-07-15 22:17:37.623636] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.340 ************************************ 00:06:24.340 END TEST dd_bs_lt_native_bs 00:06:24.340 ************************************ 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.340 00:06:24.340 real 0m0.861s 00:06:24.340 user 0m0.589s 00:06:24.340 sys 0m0.228s 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.340 22:17:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.341 ************************************ 00:06:24.341 START TEST dd_rw 00:06:24.341 ************************************ 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:24.341 22:17:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.947 22:17:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:24.947 22:17:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:24.947 22:17:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.947 22:17:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.947 [2024-07-15 22:17:38.380397] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:24.947 [2024-07-15 22:17:38.380475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62672 ] 00:06:24.947 { 00:06:24.947 "subsystems": [ 00:06:24.947 { 00:06:24.947 "subsystem": "bdev", 00:06:24.947 "config": [ 00:06:24.947 { 00:06:24.947 "params": { 00:06:24.947 "trtype": "pcie", 00:06:24.947 "traddr": "0000:00:10.0", 00:06:24.947 "name": "Nvme0" 00:06:24.947 }, 00:06:24.947 "method": "bdev_nvme_attach_controller" 00:06:24.947 }, 00:06:24.947 { 00:06:24.947 "method": "bdev_wait_for_examine" 00:06:24.947 } 00:06:24.947 ] 00:06:24.947 } 00:06:24.947 ] 00:06:24.947 } 00:06:24.947 [2024-07-15 22:17:38.522532] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.205 [2024-07-15 22:17:38.619225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.205 [2024-07-15 22:17:38.660323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.464  Copying: 60/60 [kB] (average 19 MBps) 00:06:25.464 00:06:25.464 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:25.464 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:25.464 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.464 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.722 { 00:06:25.722 "subsystems": [ 00:06:25.722 { 00:06:25.722 "subsystem": "bdev", 00:06:25.722 "config": [ 
00:06:25.722 { 00:06:25.722 "params": { 00:06:25.722 "trtype": "pcie", 00:06:25.722 "traddr": "0000:00:10.0", 00:06:25.722 "name": "Nvme0" 00:06:25.722 }, 00:06:25.722 "method": "bdev_nvme_attach_controller" 00:06:25.722 }, 00:06:25.722 { 00:06:25.722 "method": "bdev_wait_for_examine" 00:06:25.722 } 00:06:25.722 ] 00:06:25.722 } 00:06:25.722 ] 00:06:25.722 } 00:06:25.722 [2024-07-15 22:17:39.113483] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:25.722 [2024-07-15 22:17:39.113557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62685 ] 00:06:25.722 [2024-07-15 22:17:39.256072] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.980 [2024-07-15 22:17:39.402943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.980 [2024-07-15 22:17:39.474372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.545  Copying: 60/60 [kB] (average 11 MBps) 00:06:26.545 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.545 22:17:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.545 { 00:06:26.545 "subsystems": [ 00:06:26.545 { 00:06:26.545 "subsystem": "bdev", 00:06:26.545 "config": [ 00:06:26.545 { 00:06:26.545 "params": { 00:06:26.545 "trtype": "pcie", 00:06:26.545 "traddr": "0000:00:10.0", 00:06:26.545 "name": "Nvme0" 00:06:26.545 }, 00:06:26.545 "method": "bdev_nvme_attach_controller" 00:06:26.545 }, 00:06:26.545 { 00:06:26.545 "method": "bdev_wait_for_examine" 00:06:26.545 } 00:06:26.545 ] 00:06:26.545 } 00:06:26.545 ] 00:06:26.545 } 00:06:26.545 [2024-07-15 22:17:39.952826] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:26.545 [2024-07-15 22:17:39.952897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62701 ] 00:06:26.545 [2024-07-15 22:17:40.097057] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.803 [2024-07-15 22:17:40.244239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.803 [2024-07-15 22:17:40.320183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.320  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:27.320 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:27.320 22:17:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.888 22:17:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:27.888 22:17:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:27.888 22:17:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.888 22:17:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.888 [2024-07-15 22:17:41.312728] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:27.888 [2024-07-15 22:17:41.312872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:06:27.888 { 00:06:27.888 "subsystems": [ 00:06:27.888 { 00:06:27.888 "subsystem": "bdev", 00:06:27.888 "config": [ 00:06:27.888 { 00:06:27.888 "params": { 00:06:27.888 "trtype": "pcie", 00:06:27.888 "traddr": "0000:00:10.0", 00:06:27.888 "name": "Nvme0" 00:06:27.888 }, 00:06:27.888 "method": "bdev_nvme_attach_controller" 00:06:27.888 }, 00:06:27.888 { 00:06:27.888 "method": "bdev_wait_for_examine" 00:06:27.888 } 00:06:27.888 ] 00:06:27.888 } 00:06:27.888 ] 00:06:27.888 } 00:06:27.888 [2024-07-15 22:17:41.462236] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.146 [2024-07-15 22:17:41.618301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.146 [2024-07-15 22:17:41.692059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.665  Copying: 60/60 [kB] (average 58 MBps) 00:06:28.665 00:06:28.665 22:17:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:28.665 22:17:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:28.665 22:17:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.665 22:17:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.665 [2024-07-15 22:17:42.176704] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:28.665 [2024-07-15 22:17:42.176787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62739 ] 00:06:28.665 { 00:06:28.665 "subsystems": [ 00:06:28.665 { 00:06:28.665 "subsystem": "bdev", 00:06:28.665 "config": [ 00:06:28.665 { 00:06:28.665 "params": { 00:06:28.665 "trtype": "pcie", 00:06:28.665 "traddr": "0000:00:10.0", 00:06:28.665 "name": "Nvme0" 00:06:28.665 }, 00:06:28.665 "method": "bdev_nvme_attach_controller" 00:06:28.665 }, 00:06:28.665 { 00:06:28.665 "method": "bdev_wait_for_examine" 00:06:28.665 } 00:06:28.665 ] 00:06:28.665 } 00:06:28.665 ] 00:06:28.665 } 00:06:28.924 [2024-07-15 22:17:42.319417] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.924 [2024-07-15 22:17:42.478575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.183 [2024-07-15 22:17:42.559865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.446  Copying: 60/60 [kB] (average 29 MBps) 00:06:29.446 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.446 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.446 [2024-07-15 22:17:43.077934] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
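Each pass of the dd_rw test above follows the same round trip: gen_bytes fills dd.dump0, spdk_dd writes it to the Nvme0n1 bdev at the chosen --bs/--qd, a second spdk_dd reads the same range back into dd.dump1, diff -q confirms the data survived, and clear_nvme zero-fills the bdev before the next combination. A compressed, illustrative sketch of one pass is below; it is not the verbatim dd/basic_rw.sh, and SPDK_DD and one_rw_pass are hypothetical names standing in for the binary path and the loop body seen in the trace.

# Hedged sketch of one write/read/verify pass of the dd_rw test.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
one_rw_pass() {
  local bs=$1 qd=$2 count=$3
  local size=$((bs * count))
  gen_bytes "$size" > "$test_file0"                        # random payload (dd.dump0)
  "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 \
             --bs="$bs" --qd="$qd" --json <(gen_conf)      # write to the bdev
  "$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs="$bs" \
             --qd="$qd" --count="$count" --json <(gen_conf) # read it back (dd.dump1)
  diff -q "$test_file0" "$test_file1"                      # data must round-trip intact
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 \
             --count=1 --json <(gen_conf)                  # clear_nvme before the next pass
}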
00:06:29.446 [2024-07-15 22:17:43.078229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62761 ] 00:06:29.704 { 00:06:29.704 "subsystems": [ 00:06:29.704 { 00:06:29.704 "subsystem": "bdev", 00:06:29.704 "config": [ 00:06:29.704 { 00:06:29.704 "params": { 00:06:29.704 "trtype": "pcie", 00:06:29.704 "traddr": "0000:00:10.0", 00:06:29.704 "name": "Nvme0" 00:06:29.704 }, 00:06:29.704 "method": "bdev_nvme_attach_controller" 00:06:29.704 }, 00:06:29.704 { 00:06:29.704 "method": "bdev_wait_for_examine" 00:06:29.704 } 00:06:29.704 ] 00:06:29.704 } 00:06:29.704 ] 00:06:29.704 } 00:06:29.704 [2024-07-15 22:17:43.222139] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.962 [2024-07-15 22:17:43.376406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.962 [2024-07-15 22:17:43.451193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.528  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:30.528 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:30.528 22:17:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.785 22:17:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:30.785 22:17:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.785 22:17:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.785 22:17:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.785 [2024-07-15 22:17:44.396985] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:30.785 [2024-07-15 22:17:44.397075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62781 ] 00:06:30.785 { 00:06:30.785 "subsystems": [ 00:06:30.785 { 00:06:30.785 "subsystem": "bdev", 00:06:30.785 "config": [ 00:06:30.785 { 00:06:30.785 "params": { 00:06:30.785 "trtype": "pcie", 00:06:30.785 "traddr": "0000:00:10.0", 00:06:30.785 "name": "Nvme0" 00:06:30.785 }, 00:06:30.785 "method": "bdev_nvme_attach_controller" 00:06:30.785 }, 00:06:30.785 { 00:06:30.785 "method": "bdev_wait_for_examine" 00:06:30.785 } 00:06:30.785 ] 00:06:30.785 } 00:06:30.785 ] 00:06:30.785 } 00:06:31.042 [2024-07-15 22:17:44.535480] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.301 [2024-07-15 22:17:44.685289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.301 [2024-07-15 22:17:44.757790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.559  Copying: 56/56 [kB] (average 54 MBps) 00:06:31.559 00:06:31.559 22:17:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:31.559 22:17:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:31.559 22:17:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.559 22:17:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.860 [2024-07-15 22:17:45.235401] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
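The combinations being exercised come from the small sweep set up at the start of the dd_rw test (the @15-@25 trace lines): queue depths 1 and 64, and block sizes generated by left-shifting the 4096-byte native block size, which is why the log moves from 4 KiB to 8 KiB transfers here. A hedged sketch of that sweep, reusing the one_rw_pass helper sketched earlier; the count formula is a hypothetical reconstruction that reproduces the 15- and 7-block counts seen above, not the script's own sizing rule.

# Hedged sketch of the bs/qd sweep driving the passes above.
native_bs=4096          # value returned by get_native_nvme_bs above
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))     # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$((61440 / bs))         # 15 blocks at 4 KiB, 7 at 8 KiB, as logged
    one_rw_pass "$bs" "$qd" "$count"
  done
done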
00:06:31.860 [2024-07-15 22:17:45.235480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62799 ] 00:06:31.860 { 00:06:31.860 "subsystems": [ 00:06:31.860 { 00:06:31.860 "subsystem": "bdev", 00:06:31.860 "config": [ 00:06:31.860 { 00:06:31.860 "params": { 00:06:31.860 "trtype": "pcie", 00:06:31.860 "traddr": "0000:00:10.0", 00:06:31.860 "name": "Nvme0" 00:06:31.860 }, 00:06:31.860 "method": "bdev_nvme_attach_controller" 00:06:31.860 }, 00:06:31.860 { 00:06:31.860 "method": "bdev_wait_for_examine" 00:06:31.860 } 00:06:31.860 ] 00:06:31.860 } 00:06:31.860 ] 00:06:31.860 } 00:06:31.860 [2024-07-15 22:17:45.378039] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.118 [2024-07-15 22:17:45.526039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.118 [2024-07-15 22:17:45.599459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.689  Copying: 56/56 [kB] (average 27 MBps) 00:06:32.689 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.689 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.689 [2024-07-15 22:17:46.084471] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:32.689 [2024-07-15 22:17:46.084778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62820 ] 00:06:32.689 { 00:06:32.689 "subsystems": [ 00:06:32.689 { 00:06:32.689 "subsystem": "bdev", 00:06:32.689 "config": [ 00:06:32.689 { 00:06:32.689 "params": { 00:06:32.689 "trtype": "pcie", 00:06:32.689 "traddr": "0000:00:10.0", 00:06:32.689 "name": "Nvme0" 00:06:32.689 }, 00:06:32.689 "method": "bdev_nvme_attach_controller" 00:06:32.689 }, 00:06:32.689 { 00:06:32.689 "method": "bdev_wait_for_examine" 00:06:32.689 } 00:06:32.689 ] 00:06:32.689 } 00:06:32.689 ] 00:06:32.689 } 00:06:32.689 [2024-07-15 22:17:46.226506] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.947 [2024-07-15 22:17:46.374052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.948 [2024-07-15 22:17:46.447546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.515  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:33.515 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:33.515 22:17:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.773 22:17:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:33.773 22:17:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:33.773 22:17:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.773 22:17:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.773 [2024-07-15 22:17:47.392559] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:33.773 [2024-07-15 22:17:47.392650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:06:33.773 { 00:06:33.773 "subsystems": [ 00:06:33.773 { 00:06:33.773 "subsystem": "bdev", 00:06:33.773 "config": [ 00:06:33.773 { 00:06:33.773 "params": { 00:06:33.773 "trtype": "pcie", 00:06:33.773 "traddr": "0000:00:10.0", 00:06:33.773 "name": "Nvme0" 00:06:33.773 }, 00:06:33.773 "method": "bdev_nvme_attach_controller" 00:06:33.773 }, 00:06:33.773 { 00:06:33.773 "method": "bdev_wait_for_examine" 00:06:33.773 } 00:06:33.773 ] 00:06:33.773 } 00:06:33.773 ] 00:06:33.773 } 00:06:34.032 [2024-07-15 22:17:47.534327] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.291 [2024-07-15 22:17:47.689802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.291 [2024-07-15 22:17:47.769622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.860  Copying: 56/56 [kB] (average 54 MBps) 00:06:34.860 00:06:34.860 22:17:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:34.860 22:17:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:34.860 22:17:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.860 22:17:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.860 [2024-07-15 22:17:48.256902] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:34.860 [2024-07-15 22:17:48.256976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62858 ] 00:06:34.860 { 00:06:34.860 "subsystems": [ 00:06:34.860 { 00:06:34.860 "subsystem": "bdev", 00:06:34.860 "config": [ 00:06:34.860 { 00:06:34.860 "params": { 00:06:34.860 "trtype": "pcie", 00:06:34.860 "traddr": "0000:00:10.0", 00:06:34.860 "name": "Nvme0" 00:06:34.860 }, 00:06:34.860 "method": "bdev_nvme_attach_controller" 00:06:34.860 }, 00:06:34.860 { 00:06:34.860 "method": "bdev_wait_for_examine" 00:06:34.860 } 00:06:34.860 ] 00:06:34.860 } 00:06:34.860 ] 00:06:34.860 } 00:06:34.860 [2024-07-15 22:17:48.400798] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.120 [2024-07-15 22:17:48.549797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.120 [2024-07-15 22:17:48.626378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.704  Copying: 56/56 [kB] (average 54 MBps) 00:06:35.704 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.704 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.704 [2024-07-15 22:17:49.120965] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:35.704 [2024-07-15 22:17:49.121068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62879 ] 00:06:35.704 { 00:06:35.704 "subsystems": [ 00:06:35.704 { 00:06:35.704 "subsystem": "bdev", 00:06:35.704 "config": [ 00:06:35.704 { 00:06:35.704 "params": { 00:06:35.704 "trtype": "pcie", 00:06:35.704 "traddr": "0000:00:10.0", 00:06:35.704 "name": "Nvme0" 00:06:35.704 }, 00:06:35.704 "method": "bdev_nvme_attach_controller" 00:06:35.704 }, 00:06:35.704 { 00:06:35.704 "method": "bdev_wait_for_examine" 00:06:35.704 } 00:06:35.704 ] 00:06:35.704 } 00:06:35.704 ] 00:06:35.704 } 00:06:35.704 [2024-07-15 22:17:49.264630] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.967 [2024-07-15 22:17:49.418520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.967 [2024-07-15 22:17:49.495719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.487  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:36.487 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:36.487 22:17:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 22:17:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:36.745 22:17:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:36.745 22:17:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.745 22:17:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 [2024-07-15 22:17:50.377996] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:36.745 [2024-07-15 22:17:50.378084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62898 ] 00:06:37.003 { 00:06:37.003 "subsystems": [ 00:06:37.003 { 00:06:37.003 "subsystem": "bdev", 00:06:37.003 "config": [ 00:06:37.003 { 00:06:37.003 "params": { 00:06:37.003 "trtype": "pcie", 00:06:37.003 "traddr": "0000:00:10.0", 00:06:37.003 "name": "Nvme0" 00:06:37.003 }, 00:06:37.003 "method": "bdev_nvme_attach_controller" 00:06:37.003 }, 00:06:37.003 { 00:06:37.003 "method": "bdev_wait_for_examine" 00:06:37.003 } 00:06:37.003 ] 00:06:37.003 } 00:06:37.003 ] 00:06:37.003 } 00:06:37.003 [2024-07-15 22:17:50.521991] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.261 [2024-07-15 22:17:50.682720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.261 [2024-07-15 22:17:50.756811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.825  Copying: 48/48 [kB] (average 46 MBps) 00:06:37.825 00:06:37.825 22:17:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:37.825 22:17:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:37.825 22:17:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.825 22:17:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.825 [2024-07-15 22:17:51.240898] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:37.825 [2024-07-15 22:17:51.240980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62917 ] 00:06:37.825 { 00:06:37.825 "subsystems": [ 00:06:37.825 { 00:06:37.825 "subsystem": "bdev", 00:06:37.825 "config": [ 00:06:37.825 { 00:06:37.825 "params": { 00:06:37.825 "trtype": "pcie", 00:06:37.825 "traddr": "0000:00:10.0", 00:06:37.825 "name": "Nvme0" 00:06:37.825 }, 00:06:37.825 "method": "bdev_nvme_attach_controller" 00:06:37.825 }, 00:06:37.825 { 00:06:37.825 "method": "bdev_wait_for_examine" 00:06:37.825 } 00:06:37.825 ] 00:06:37.825 } 00:06:37.825 ] 00:06:37.825 } 00:06:37.825 [2024-07-15 22:17:51.384541] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.082 [2024-07-15 22:17:51.530954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.082 [2024-07-15 22:17:51.603869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.597  Copying: 48/48 [kB] (average 46 MBps) 00:06:38.597 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.597 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.597 { 00:06:38.597 "subsystems": [ 00:06:38.597 { 00:06:38.597 "subsystem": "bdev", 00:06:38.597 "config": [ 00:06:38.597 { 00:06:38.597 "params": { 00:06:38.597 "trtype": "pcie", 00:06:38.597 "traddr": "0000:00:10.0", 00:06:38.597 "name": "Nvme0" 00:06:38.597 }, 00:06:38.597 "method": "bdev_nvme_attach_controller" 00:06:38.597 }, 00:06:38.597 { 00:06:38.597 "method": "bdev_wait_for_examine" 00:06:38.597 } 00:06:38.597 ] 00:06:38.597 } 00:06:38.597 ] 00:06:38.597 } 00:06:38.597 [2024-07-15 22:17:52.084195] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:38.597 [2024-07-15 22:17:52.084274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62938 ] 00:06:38.598 [2024-07-15 22:17:52.228097] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.855 [2024-07-15 22:17:52.378990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.855 [2024-07-15 22:17:52.453453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.369  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:39.369 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:39.369 22:17:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.934 22:17:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:39.934 22:17:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.934 22:17:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.934 22:17:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.934 [2024-07-15 22:17:53.341550] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:39.934 [2024-07-15 22:17:53.341646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62957 ] 00:06:39.934 { 00:06:39.934 "subsystems": [ 00:06:39.934 { 00:06:39.934 "subsystem": "bdev", 00:06:39.934 "config": [ 00:06:39.934 { 00:06:39.934 "params": { 00:06:39.934 "trtype": "pcie", 00:06:39.934 "traddr": "0000:00:10.0", 00:06:39.934 "name": "Nvme0" 00:06:39.934 }, 00:06:39.934 "method": "bdev_nvme_attach_controller" 00:06:39.934 }, 00:06:39.934 { 00:06:39.934 "method": "bdev_wait_for_examine" 00:06:39.934 } 00:06:39.934 ] 00:06:39.934 } 00:06:39.934 ] 00:06:39.934 } 00:06:39.935 [2024-07-15 22:17:53.474939] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.192 [2024-07-15 22:17:53.624278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.192 [2024-07-15 22:17:53.697906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.756  Copying: 48/48 [kB] (average 46 MBps) 00:06:40.756 00:06:40.756 22:17:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:40.756 22:17:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:40.756 22:17:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.756 22:17:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.756 { 00:06:40.756 "subsystems": [ 00:06:40.756 { 00:06:40.756 "subsystem": "bdev", 00:06:40.756 "config": [ 00:06:40.756 { 00:06:40.756 "params": { 00:06:40.756 "trtype": "pcie", 00:06:40.756 "traddr": "0000:00:10.0", 00:06:40.756 "name": "Nvme0" 00:06:40.756 }, 00:06:40.756 "method": "bdev_nvme_attach_controller" 00:06:40.756 }, 00:06:40.756 { 00:06:40.756 "method": "bdev_wait_for_examine" 00:06:40.756 } 00:06:40.756 ] 00:06:40.756 } 00:06:40.756 ] 00:06:40.756 } 00:06:40.756 [2024-07-15 22:17:54.188421] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:40.756 [2024-07-15 22:17:54.188505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62976 ] 00:06:40.756 [2024-07-15 22:17:54.331485] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.013 [2024-07-15 22:17:54.481267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.013 [2024-07-15 22:17:54.557533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.532  Copying: 48/48 [kB] (average 46 MBps) 00:06:41.532 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.532 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 { 00:06:41.532 "subsystems": [ 00:06:41.532 { 00:06:41.532 "subsystem": "bdev", 00:06:41.532 "config": [ 00:06:41.532 { 00:06:41.532 "params": { 00:06:41.532 "trtype": "pcie", 00:06:41.532 "traddr": "0000:00:10.0", 00:06:41.532 "name": "Nvme0" 00:06:41.532 }, 00:06:41.532 "method": "bdev_nvme_attach_controller" 00:06:41.532 }, 00:06:41.532 { 00:06:41.532 "method": "bdev_wait_for_examine" 00:06:41.532 } 00:06:41.532 ] 00:06:41.532 } 00:06:41.532 ] 00:06:41.532 } 00:06:41.532 [2024-07-15 22:17:55.067821] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:41.532 [2024-07-15 22:17:55.067898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62992 ] 00:06:41.791 [2024-07-15 22:17:55.212333] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.791 [2024-07-15 22:17:55.362188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.050 [2024-07-15 22:17:55.438536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.309  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:42.309 00:06:42.309 00:06:42.309 real 0m18.051s 00:06:42.309 user 0m12.950s 00:06:42.309 sys 0m7.439s 00:06:42.309 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.309 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.309 ************************************ 00:06:42.309 END TEST dd_rw 00:06:42.309 ************************************ 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.568 ************************************ 00:06:42.568 START TEST dd_rw_offset 00:06:42.568 ************************************ 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:42.568 22:17:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=27d97n0rvquk2jaiuftivtr02y152zbbw7x4jf9jldmokflw8j0tie75nbfvheo2cws973r9dhnlpau7sbcd51ho1ihpwjbggpl8j47v4x629t0k54o2dsm8vq5ugetygr9t7vrnwe9hqctx6fwf3g2np9pkdgguhk47ahr0kp1wrw9yebvt7cc3lg6fiuuhtjsdygrqakup8g63gcss6xqe9bwn7oqalr49jis7bsaq62vgjs8nwu02pm9rvlfh11le0hgqkltf73oyeo0pjusuwqpa0e50texder0x6c8iwoc60tqs5udg6ycmfx707lqdf06zgu00n6cpwk241snp0l68s0ak9qmetq9k56lpnfz1v8dtk1ww0y3ji32tnrzcnaah73hjlir7coxw9mn2wp7g8624r1di93bcnv8jt3vs6nvby77b2td6m3py3wokmvp9ogh71nn56zn1ccjth6jw4vmcsw3h25u9p5r00d0swgmfjw5v6ibgue8pa5oglzl4e35zteexsp9regz740igbuaa74mzsb6z3t4awxo2jv87l6gi8qin1qxdyga8oatd2mxo7jk348wtczmr1i3agjj8mdetuz5kn6iuuu8jyyxv7wr0f141khh6iiuyby4k97bj3sjswgifap8c45eartl5wqh9zabx40vy8faxciq6kjf3m61ephxd6m98ei3pqo9ftxj7jj04152h9yzgy2t4uduaeo6ep139bxcer1czlr1f6uz0ei8tos63qugym9zz6imxj3i6rrsao1qb1v0q85leqgellp5hzfg7h1f9vqfpd36nwsfpmnai28cjpgpu5czwzjxq9l3m6ng6a304524wg7dz73frr8tq258p11eby4pdaofz5n346tzt5n4ed8668n1kkuj1ccxrnojes5y1fimqjrqne2medrre8hrg5vt8zm60jw2huoztzu5jwacyf7mb3yxstrhz0zxwx7iw10e8q1khi6bwsm3zlzha72ian3dllrgqmnpccgt8vtpzp4j910qwvpypbqz6x6cajdsvofmtqk3v5d9t253syhfl4y86lqm9hqs8lokq4yww27p9pbkd1cocqv21nezz77kf44ou7wui8nfa3taakbj1qqneplu96ulvc5tuxkm6tcxuswye68p4d04ysvg8o96h0vteqcaggjfjz1ojyvo5ebkc9nkwbzi5vgra5d2u6as07nxculdoh3pzctsx3bwmmv7xn6ebu82bub4eo4vqpf511kuk090ko25nu41fnqn6x3201zgoxyqk1b4jaqx2h6m3cu9rono2ibp16qndwzcqoiehv5xnosh8phpy4vmhmgxzamvtyo5wsqg24cwqb8onl34o8l3lx88rj2762wn0k0pgsl1f4millgc3w7rhgadytwbwkisaeznakfuj0te78hn4lqd7yuzyxch9af2ljaex77t27xcj98d9l6hdsk3mzmd8ir18d1ua8t5re6a6v70hew4xrif1zs3mdvj4z4chwwqobp3zxkvw5zs09v05mg4owxxua0rt6bvyzlk9xemr5ao14judrsvurfw9wlpya5744m4y8t0rq80nd00tcjgz4p90aq4bzmvtaifqu6n1fnzjc4kwabpkax0hbzqstut6cu4ojllcb57tja9tnc6kff1zzmirp9thf1v3rh1qblufdhep344gtxekm0et5avwmnhc92ysizkpe0zsdgluy6s0bzdvh29fmbc47z5vr3p2kfgm0h1jt3u2vxzxqv0un6m90wtk4v0gmioen98kax3sm4ruw1dcfgzk8ahiujps80afuqyhj8fs78fa68it3ut3da70sgp44xkertxjyw9tc2fb3hav3hiaanipjv9o1m69mog016c4qoauu7pe26lnmslc786e6anye9cxoz73sdu4gah4o58zlprgk42wce4uvd7s52zi83vbvr5y8mtp6v7b5smlwww8xgrz2iauyj6in1o0jwgdkqd81o85ezlajtg25izny1u1zwn92n2jiagdhw1823dzuob95vvgaxmka4ywm4ym9k0vgmljibhcg1wi6jt9ikmsp2tra03cadkiwbjfljnqppl6r9brbbnygxqkpajlwhg097wciwy2yer50aqol45xoyjoba4flnj5ebir3k78mnmvzmcta1rwqslhyq3l1ly8gi4lu4ha0fqdgzrf6nl13qyuami4lap45ctdd5cwv5cjeb1l5n8nfkc8dsuv83k5u4lrh4xy2yle2xhmen6j6x8a4h6cthmnwt8q1hqav6fzq33qbkpx8xiw9xkav5u7faoqt8heu0723dkwse53014l33xevfv62xb080lfeqx1s1x4btprdldxvuk56jg6n4vh1j764l02dq1z398vddwv5g1bliec5zx1kkfnaqdhyhu22o4x4r1ftdb5qvz91yq62zufwrj9xlgpwow4ssfatirs911bc1ectbkj0xk1rmlobfs2b56da38xdd7tud8gmbburv284b4mm92etzv1v92xg69rbpjcqbja4gddv0qgixt5f551eicevlx0ncf8s6bc8y9fb6exld6ybuaco2hr8elxu4jx4d80w4obcsua8hxszmwfv46odzgup6yvk22a3v2aakd7rqx7vu1m7bqa98gqq16m20kgu1dl00i8m5rua4o4oz406rz90hf8t4qda12u6iqmgx3oci1pqdvtpoc4l1nus4x6cbgko5iuc2qa00j89t6nedb9yskb7xs6ylscxt2pednn0gxvn93d3vd8l3zggka04lml4hbgixhzf2vn101nlh39g05sd1sm4iu17dgg1ipj761winusgxcobhf73i08xm00so04gka4khujxiijcbzyyiafxtz33ip3hua2e8nmh0n8q8ijzw7slj9t06x4a1vg3sglkbt3maotmx7p2vzvn8xqp77ovq8nkdawqr45nw38fkqinzh0cl7e5aiz6j185v3uq7vts0rhws6na9u9bbhe1jlqdahymvmlx6uati3nh5f5ajy7a95onp89xpip2tgyen08xxgrznyk55du1rudbmtj7iarjzcl9vvm5qqp8p97r9cr5wetesgc3wr54du2okb9nke32yq2kciz531pgrsmac4fyrzlmmrs4ww4b9rcryfzko8m5mjs56ryb5jmmquoduwh8r65wbdm4vsvrcoito3x0os9v8c1xf9tuqjfu2dudh1z3rq2qlaidn2wcp70ggq5kywxavu8j9v4fandkk4pn7ziv30zsprv66durwxf47j9rraakmw7qkc84guyx8m1chc8pa757upqwsjwx3fpiemjgupo70t3j8k3c0ylm0jlj8mmj9zwprsok3nyuqjqqh5srqf4e6rbteem3r81mrd60j6zi26q3pnb8ysdi16ab2hyra50s0tow3t60ixttfntjntyhh4bfvxl7e6ry8yr33emy495ewhj1b2pg7yp3i2wc4wobw
m9b70jvdg8z59o8iwgavtgb5s6m77kd6di51rfd44kb6zq30cpt4g8lh18cj87f7q6ig0z4x3u44ayfjkpo16ei0zwed3pw90zal2umsefa915fz15sf54hxgof705lo37zhrfhc9i58pk22v74qq7ch1qxs0cbw0xoywk1672d4hywtrrukt5pho1nvm3sqvkrot9l5qpwfyivenfjtkjs0n7hez8nrsmdmq8ykcovbbd4xlm98krmz8rolulzydjov82zb6e7jtjqvtth9xdzhda8rv3g0ilbsu6datqabl7j2uhet86yaej9lnagd1w0s9g4cl1r2r2syk1qwdjudepu7g7q9btjz43b41nosk5ctlj7a38xi1wnwm52nsm7ludhsuq4amzj67c62k8qlkjzr48buoa1p938u5h47ogjj2plqatwrtdgsym6rdov3lgy2xdi02hohola63gaa3er4syli9h3zmqu48gtchz40a985g18srcoulvxplnnxb4x00gqc7pgw008tgknih16dkx76hg 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.568 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.568 [2024-07-15 22:17:56.053520] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:42.568 [2024-07-15 22:17:56.053613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63028 ] 00:06:42.568 { 00:06:42.568 "subsystems": [ 00:06:42.568 { 00:06:42.568 "subsystem": "bdev", 00:06:42.568 "config": [ 00:06:42.568 { 00:06:42.568 "params": { 00:06:42.568 "trtype": "pcie", 00:06:42.568 "traddr": "0000:00:10.0", 00:06:42.568 "name": "Nvme0" 00:06:42.568 }, 00:06:42.568 "method": "bdev_nvme_attach_controller" 00:06:42.568 }, 00:06:42.568 { 00:06:42.568 "method": "bdev_wait_for_examine" 00:06:42.568 } 00:06:42.568 ] 00:06:42.568 } 00:06:42.568 ] 00:06:42.568 } 00:06:42.568 [2024-07-15 22:17:56.191447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.827 [2024-07-15 22:17:56.343381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.828 [2024-07-15 22:17:56.418413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.345  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:43.345 00:06:43.345 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:43.345 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:43.345 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:43.345 22:17:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:43.345 [2024-07-15 22:17:56.905088] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:43.345 [2024-07-15 22:17:56.905193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63047 ] 00:06:43.345 { 00:06:43.345 "subsystems": [ 00:06:43.345 { 00:06:43.345 "subsystem": "bdev", 00:06:43.345 "config": [ 00:06:43.345 { 00:06:43.345 "params": { 00:06:43.345 "trtype": "pcie", 00:06:43.345 "traddr": "0000:00:10.0", 00:06:43.345 "name": "Nvme0" 00:06:43.345 }, 00:06:43.345 "method": "bdev_nvme_attach_controller" 00:06:43.345 }, 00:06:43.345 { 00:06:43.345 "method": "bdev_wait_for_examine" 00:06:43.345 } 00:06:43.345 ] 00:06:43.345 } 00:06:43.345 ] 00:06:43.345 } 00:06:43.603 [2024-07-15 22:17:57.046533] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.603 [2024-07-15 22:17:57.197990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.862 [2024-07-15 22:17:57.277048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.121  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:44.121 00:06:44.121 22:17:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:44.122 22:17:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 27d97n0rvquk2jaiuftivtr02y152zbbw7x4jf9jldmokflw8j0tie75nbfvheo2cws973r9dhnlpau7sbcd51ho1ihpwjbggpl8j47v4x629t0k54o2dsm8vq5ugetygr9t7vrnwe9hqctx6fwf3g2np9pkdgguhk47ahr0kp1wrw9yebvt7cc3lg6fiuuhtjsdygrqakup8g63gcss6xqe9bwn7oqalr49jis7bsaq62vgjs8nwu02pm9rvlfh11le0hgqkltf73oyeo0pjusuwqpa0e50texder0x6c8iwoc60tqs5udg6ycmfx707lqdf06zgu00n6cpwk241snp0l68s0ak9qmetq9k56lpnfz1v8dtk1ww0y3ji32tnrzcnaah73hjlir7coxw9mn2wp7g8624r1di93bcnv8jt3vs6nvby77b2td6m3py3wokmvp9ogh71nn56zn1ccjth6jw4vmcsw3h25u9p5r00d0swgmfjw5v6ibgue8pa5oglzl4e35zteexsp9regz740igbuaa74mzsb6z3t4awxo2jv87l6gi8qin1qxdyga8oatd2mxo7jk348wtczmr1i3agjj8mdetuz5kn6iuuu8jyyxv7wr0f141khh6iiuyby4k97bj3sjswgifap8c45eartl5wqh9zabx40vy8faxciq6kjf3m61ephxd6m98ei3pqo9ftxj7jj04152h9yzgy2t4uduaeo6ep139bxcer1czlr1f6uz0ei8tos63qugym9zz6imxj3i6rrsao1qb1v0q85leqgellp5hzfg7h1f9vqfpd36nwsfpmnai28cjpgpu5czwzjxq9l3m6ng6a304524wg7dz73frr8tq258p11eby4pdaofz5n346tzt5n4ed8668n1kkuj1ccxrnojes5y1fimqjrqne2medrre8hrg5vt8zm60jw2huoztzu5jwacyf7mb3yxstrhz0zxwx7iw10e8q1khi6bwsm3zlzha72ian3dllrgqmnpccgt8vtpzp4j910qwvpypbqz6x6cajdsvofmtqk3v5d9t253syhfl4y86lqm9hqs8lokq4yww27p9pbkd1cocqv21nezz77kf44ou7wui8nfa3taakbj1qqneplu96ulvc5tuxkm6tcxuswye68p4d04ysvg8o96h0vteqcaggjfjz1ojyvo5ebkc9nkwbzi5vgra5d2u6as07nxculdoh3pzctsx3bwmmv7xn6ebu82bub4eo4vqpf511kuk090ko25nu41fnqn6x3201zgoxyqk1b4jaqx2h6m3cu9rono2ibp16qndwzcqoiehv5xnosh8phpy4vmhmgxzamvtyo5wsqg24cwqb8onl34o8l3lx88rj2762wn0k0pgsl1f4millgc3w7rhgadytwbwkisaeznakfuj0te78hn4lqd7yuzyxch9af2ljaex77t27xcj98d9l6hdsk3mzmd8ir18d1ua8t5re6a6v70hew4xrif1zs3mdvj4z4chwwqobp3zxkvw5zs09v05mg4owxxua0rt6bvyzlk9xemr5ao14judrsvurfw9wlpya5744m4y8t0rq80nd00tcjgz4p90aq4bzmvtaifqu6n1fnzjc4kwabpkax0hbzqstut6cu4ojllcb57tja9tnc6kff1zzmirp9thf1v3rh1qblufdhep344gtxekm0et5avwmnhc92ysizkpe0zsdgluy6s0bzdvh29fmbc47z5vr3p2kfgm0h1jt3u2vxzxqv0un6m90wtk4v0gmioen98kax3sm4ruw1dcfgzk8ahiujps80afuqyhj8fs78fa68it3ut3da70sgp44xkertxjyw9tc2fb3hav3hiaanipjv9o1m69mog016c4qoauu7pe26lnmslc786e6anye9cxoz73sdu4gah4o58zlprgk42wce4uvd7s52zi83vbvr5y8mtp6v7b5smlwww8xgrz2iauyj6in1o0jwgdkqd81o85ezlajtg25izny1u1zwn92n2jiagdhw1823dzuob95vvgaxmka4ywm4ym9k0vgmljibhcg1wi6jt9ikmsp2tra03cadkiwbjfljnqppl6r9brbbnygxqkpajlw
hg097wciwy2yer50aqol45xoyjoba4flnj5ebir3k78mnmvzmcta1rwqslhyq3l1ly8gi4lu4ha0fqdgzrf6nl13qyuami4lap45ctdd5cwv5cjeb1l5n8nfkc8dsuv83k5u4lrh4xy2yle2xhmen6j6x8a4h6cthmnwt8q1hqav6fzq33qbkpx8xiw9xkav5u7faoqt8heu0723dkwse53014l33xevfv62xb080lfeqx1s1x4btprdldxvuk56jg6n4vh1j764l02dq1z398vddwv5g1bliec5zx1kkfnaqdhyhu22o4x4r1ftdb5qvz91yq62zufwrj9xlgpwow4ssfatirs911bc1ectbkj0xk1rmlobfs2b56da38xdd7tud8gmbburv284b4mm92etzv1v92xg69rbpjcqbja4gddv0qgixt5f551eicevlx0ncf8s6bc8y9fb6exld6ybuaco2hr8elxu4jx4d80w4obcsua8hxszmwfv46odzgup6yvk22a3v2aakd7rqx7vu1m7bqa98gqq16m20kgu1dl00i8m5rua4o4oz406rz90hf8t4qda12u6iqmgx3oci1pqdvtpoc4l1nus4x6cbgko5iuc2qa00j89t6nedb9yskb7xs6ylscxt2pednn0gxvn93d3vd8l3zggka04lml4hbgixhzf2vn101nlh39g05sd1sm4iu17dgg1ipj761winusgxcobhf73i08xm00so04gka4khujxiijcbzyyiafxtz33ip3hua2e8nmh0n8q8ijzw7slj9t06x4a1vg3sglkbt3maotmx7p2vzvn8xqp77ovq8nkdawqr45nw38fkqinzh0cl7e5aiz6j185v3uq7vts0rhws6na9u9bbhe1jlqdahymvmlx6uati3nh5f5ajy7a95onp89xpip2tgyen08xxgrznyk55du1rudbmtj7iarjzcl9vvm5qqp8p97r9cr5wetesgc3wr54du2okb9nke32yq2kciz531pgrsmac4fyrzlmmrs4ww4b9rcryfzko8m5mjs56ryb5jmmquoduwh8r65wbdm4vsvrcoito3x0os9v8c1xf9tuqjfu2dudh1z3rq2qlaidn2wcp70ggq5kywxavu8j9v4fandkk4pn7ziv30zsprv66durwxf47j9rraakmw7qkc84guyx8m1chc8pa757upqwsjwx3fpiemjgupo70t3j8k3c0ylm0jlj8mmj9zwprsok3nyuqjqqh5srqf4e6rbteem3r81mrd60j6zi26q3pnb8ysdi16ab2hyra50s0tow3t60ixttfntjntyhh4bfvxl7e6ry8yr33emy495ewhj1b2pg7yp3i2wc4wobwm9b70jvdg8z59o8iwgavtgb5s6m77kd6di51rfd44kb6zq30cpt4g8lh18cj87f7q6ig0z4x3u44ayfjkpo16ei0zwed3pw90zal2umsefa915fz15sf54hxgof705lo37zhrfhc9i58pk22v74qq7ch1qxs0cbw0xoywk1672d4hywtrrukt5pho1nvm3sqvkrot9l5qpwfyivenfjtkjs0n7hez8nrsmdmq8ykcovbbd4xlm98krmz8rolulzydjov82zb6e7jtjqvtth9xdzhda8rv3g0ilbsu6datqabl7j2uhet86yaej9lnagd1w0s9g4cl1r2r2syk1qwdjudepu7g7q9btjz43b41nosk5ctlj7a38xi1wnwm52nsm7ludhsuq4amzj67c62k8qlkjzr48buoa1p938u5h47ogjj2plqatwrtdgsym6rdov3lgy2xdi02hohola63gaa3er4syli9h3zmqu48gtchz40a985g18srcoulvxplnnxb4x00gqc7pgw008tgknih16dkx76hg == 
\2\7\d\9\7\n\0\r\v\q\u\k\2\j\a\i\u\f\t\i\v\t\r\0\2\y\1\5\2\z\b\b\w\7\x\4\j\f\9\j\l\d\m\o\k\f\l\w\8\j\0\t\i\e\7\5\n\b\f\v\h\e\o\2\c\w\s\9\7\3\r\9\d\h\n\l\p\a\u\7\s\b\c\d\5\1\h\o\1\i\h\p\w\j\b\g\g\p\l\8\j\4\7\v\4\x\6\2\9\t\0\k\5\4\o\2\d\s\m\8\v\q\5\u\g\e\t\y\g\r\9\t\7\v\r\n\w\e\9\h\q\c\t\x\6\f\w\f\3\g\2\n\p\9\p\k\d\g\g\u\h\k\4\7\a\h\r\0\k\p\1\w\r\w\9\y\e\b\v\t\7\c\c\3\l\g\6\f\i\u\u\h\t\j\s\d\y\g\r\q\a\k\u\p\8\g\6\3\g\c\s\s\6\x\q\e\9\b\w\n\7\o\q\a\l\r\4\9\j\i\s\7\b\s\a\q\6\2\v\g\j\s\8\n\w\u\0\2\p\m\9\r\v\l\f\h\1\1\l\e\0\h\g\q\k\l\t\f\7\3\o\y\e\o\0\p\j\u\s\u\w\q\p\a\0\e\5\0\t\e\x\d\e\r\0\x\6\c\8\i\w\o\c\6\0\t\q\s\5\u\d\g\6\y\c\m\f\x\7\0\7\l\q\d\f\0\6\z\g\u\0\0\n\6\c\p\w\k\2\4\1\s\n\p\0\l\6\8\s\0\a\k\9\q\m\e\t\q\9\k\5\6\l\p\n\f\z\1\v\8\d\t\k\1\w\w\0\y\3\j\i\3\2\t\n\r\z\c\n\a\a\h\7\3\h\j\l\i\r\7\c\o\x\w\9\m\n\2\w\p\7\g\8\6\2\4\r\1\d\i\9\3\b\c\n\v\8\j\t\3\v\s\6\n\v\b\y\7\7\b\2\t\d\6\m\3\p\y\3\w\o\k\m\v\p\9\o\g\h\7\1\n\n\5\6\z\n\1\c\c\j\t\h\6\j\w\4\v\m\c\s\w\3\h\2\5\u\9\p\5\r\0\0\d\0\s\w\g\m\f\j\w\5\v\6\i\b\g\u\e\8\p\a\5\o\g\l\z\l\4\e\3\5\z\t\e\e\x\s\p\9\r\e\g\z\7\4\0\i\g\b\u\a\a\7\4\m\z\s\b\6\z\3\t\4\a\w\x\o\2\j\v\8\7\l\6\g\i\8\q\i\n\1\q\x\d\y\g\a\8\o\a\t\d\2\m\x\o\7\j\k\3\4\8\w\t\c\z\m\r\1\i\3\a\g\j\j\8\m\d\e\t\u\z\5\k\n\6\i\u\u\u\8\j\y\y\x\v\7\w\r\0\f\1\4\1\k\h\h\6\i\i\u\y\b\y\4\k\9\7\b\j\3\s\j\s\w\g\i\f\a\p\8\c\4\5\e\a\r\t\l\5\w\q\h\9\z\a\b\x\4\0\v\y\8\f\a\x\c\i\q\6\k\j\f\3\m\6\1\e\p\h\x\d\6\m\9\8\e\i\3\p\q\o\9\f\t\x\j\7\j\j\0\4\1\5\2\h\9\y\z\g\y\2\t\4\u\d\u\a\e\o\6\e\p\1\3\9\b\x\c\e\r\1\c\z\l\r\1\f\6\u\z\0\e\i\8\t\o\s\6\3\q\u\g\y\m\9\z\z\6\i\m\x\j\3\i\6\r\r\s\a\o\1\q\b\1\v\0\q\8\5\l\e\q\g\e\l\l\p\5\h\z\f\g\7\h\1\f\9\v\q\f\p\d\3\6\n\w\s\f\p\m\n\a\i\2\8\c\j\p\g\p\u\5\c\z\w\z\j\x\q\9\l\3\m\6\n\g\6\a\3\0\4\5\2\4\w\g\7\d\z\7\3\f\r\r\8\t\q\2\5\8\p\1\1\e\b\y\4\p\d\a\o\f\z\5\n\3\4\6\t\z\t\5\n\4\e\d\8\6\6\8\n\1\k\k\u\j\1\c\c\x\r\n\o\j\e\s\5\y\1\f\i\m\q\j\r\q\n\e\2\m\e\d\r\r\e\8\h\r\g\5\v\t\8\z\m\6\0\j\w\2\h\u\o\z\t\z\u\5\j\w\a\c\y\f\7\m\b\3\y\x\s\t\r\h\z\0\z\x\w\x\7\i\w\1\0\e\8\q\1\k\h\i\6\b\w\s\m\3\z\l\z\h\a\7\2\i\a\n\3\d\l\l\r\g\q\m\n\p\c\c\g\t\8\v\t\p\z\p\4\j\9\1\0\q\w\v\p\y\p\b\q\z\6\x\6\c\a\j\d\s\v\o\f\m\t\q\k\3\v\5\d\9\t\2\5\3\s\y\h\f\l\4\y\8\6\l\q\m\9\h\q\s\8\l\o\k\q\4\y\w\w\2\7\p\9\p\b\k\d\1\c\o\c\q\v\2\1\n\e\z\z\7\7\k\f\4\4\o\u\7\w\u\i\8\n\f\a\3\t\a\a\k\b\j\1\q\q\n\e\p\l\u\9\6\u\l\v\c\5\t\u\x\k\m\6\t\c\x\u\s\w\y\e\6\8\p\4\d\0\4\y\s\v\g\8\o\9\6\h\0\v\t\e\q\c\a\g\g\j\f\j\z\1\o\j\y\v\o\5\e\b\k\c\9\n\k\w\b\z\i\5\v\g\r\a\5\d\2\u\6\a\s\0\7\n\x\c\u\l\d\o\h\3\p\z\c\t\s\x\3\b\w\m\m\v\7\x\n\6\e\b\u\8\2\b\u\b\4\e\o\4\v\q\p\f\5\1\1\k\u\k\0\9\0\k\o\2\5\n\u\4\1\f\n\q\n\6\x\3\2\0\1\z\g\o\x\y\q\k\1\b\4\j\a\q\x\2\h\6\m\3\c\u\9\r\o\n\o\2\i\b\p\1\6\q\n\d\w\z\c\q\o\i\e\h\v\5\x\n\o\s\h\8\p\h\p\y\4\v\m\h\m\g\x\z\a\m\v\t\y\o\5\w\s\q\g\2\4\c\w\q\b\8\o\n\l\3\4\o\8\l\3\l\x\8\8\r\j\2\7\6\2\w\n\0\k\0\p\g\s\l\1\f\4\m\i\l\l\g\c\3\w\7\r\h\g\a\d\y\t\w\b\w\k\i\s\a\e\z\n\a\k\f\u\j\0\t\e\7\8\h\n\4\l\q\d\7\y\u\z\y\x\c\h\9\a\f\2\l\j\a\e\x\7\7\t\2\7\x\c\j\9\8\d\9\l\6\h\d\s\k\3\m\z\m\d\8\i\r\1\8\d\1\u\a\8\t\5\r\e\6\a\6\v\7\0\h\e\w\4\x\r\i\f\1\z\s\3\m\d\v\j\4\z\4\c\h\w\w\q\o\b\p\3\z\x\k\v\w\5\z\s\0\9\v\0\5\m\g\4\o\w\x\x\u\a\0\r\t\6\b\v\y\z\l\k\9\x\e\m\r\5\a\o\1\4\j\u\d\r\s\v\u\r\f\w\9\w\l\p\y\a\5\7\4\4\m\4\y\8\t\0\r\q\8\0\n\d\0\0\t\c\j\g\z\4\p\9\0\a\q\4\b\z\m\v\t\a\i\f\q\u\6\n\1\f\n\z\j\c\4\k\w\a\b\p\k\a\x\0\h\b\z\q\s\t\u\t\6\c\u\4\o\j\l\l\c\b\5\7\t\j\a\9\t\n\c\6\k\f\f\1\z\z\m\i\r\p\9\t\h\f\1\v\3\r\h\1\q\b\l\u\f\d\h\e\p\3\4\4\g\t\x\e\k\m\0\e\t\5\a\v\w\m\n\h\c\9\2\y\s\i\z\k\p\e\0\z\s\d\g\l\u\y\6\s\0\b\z\d\v\h\2\9\f\m\b\c\4\7\z\5\v\r\
3\p\2\k\f\g\m\0\h\1\j\t\3\u\2\v\x\z\x\q\v\0\u\n\6\m\9\0\w\t\k\4\v\0\g\m\i\o\e\n\9\8\k\a\x\3\s\m\4\r\u\w\1\d\c\f\g\z\k\8\a\h\i\u\j\p\s\8\0\a\f\u\q\y\h\j\8\f\s\7\8\f\a\6\8\i\t\3\u\t\3\d\a\7\0\s\g\p\4\4\x\k\e\r\t\x\j\y\w\9\t\c\2\f\b\3\h\a\v\3\h\i\a\a\n\i\p\j\v\9\o\1\m\6\9\m\o\g\0\1\6\c\4\q\o\a\u\u\7\p\e\2\6\l\n\m\s\l\c\7\8\6\e\6\a\n\y\e\9\c\x\o\z\7\3\s\d\u\4\g\a\h\4\o\5\8\z\l\p\r\g\k\4\2\w\c\e\4\u\v\d\7\s\5\2\z\i\8\3\v\b\v\r\5\y\8\m\t\p\6\v\7\b\5\s\m\l\w\w\w\8\x\g\r\z\2\i\a\u\y\j\6\i\n\1\o\0\j\w\g\d\k\q\d\8\1\o\8\5\e\z\l\a\j\t\g\2\5\i\z\n\y\1\u\1\z\w\n\9\2\n\2\j\i\a\g\d\h\w\1\8\2\3\d\z\u\o\b\9\5\v\v\g\a\x\m\k\a\4\y\w\m\4\y\m\9\k\0\v\g\m\l\j\i\b\h\c\g\1\w\i\6\j\t\9\i\k\m\s\p\2\t\r\a\0\3\c\a\d\k\i\w\b\j\f\l\j\n\q\p\p\l\6\r\9\b\r\b\b\n\y\g\x\q\k\p\a\j\l\w\h\g\0\9\7\w\c\i\w\y\2\y\e\r\5\0\a\q\o\l\4\5\x\o\y\j\o\b\a\4\f\l\n\j\5\e\b\i\r\3\k\7\8\m\n\m\v\z\m\c\t\a\1\r\w\q\s\l\h\y\q\3\l\1\l\y\8\g\i\4\l\u\4\h\a\0\f\q\d\g\z\r\f\6\n\l\1\3\q\y\u\a\m\i\4\l\a\p\4\5\c\t\d\d\5\c\w\v\5\c\j\e\b\1\l\5\n\8\n\f\k\c\8\d\s\u\v\8\3\k\5\u\4\l\r\h\4\x\y\2\y\l\e\2\x\h\m\e\n\6\j\6\x\8\a\4\h\6\c\t\h\m\n\w\t\8\q\1\h\q\a\v\6\f\z\q\3\3\q\b\k\p\x\8\x\i\w\9\x\k\a\v\5\u\7\f\a\o\q\t\8\h\e\u\0\7\2\3\d\k\w\s\e\5\3\0\1\4\l\3\3\x\e\v\f\v\6\2\x\b\0\8\0\l\f\e\q\x\1\s\1\x\4\b\t\p\r\d\l\d\x\v\u\k\5\6\j\g\6\n\4\v\h\1\j\7\6\4\l\0\2\d\q\1\z\3\9\8\v\d\d\w\v\5\g\1\b\l\i\e\c\5\z\x\1\k\k\f\n\a\q\d\h\y\h\u\2\2\o\4\x\4\r\1\f\t\d\b\5\q\v\z\9\1\y\q\6\2\z\u\f\w\r\j\9\x\l\g\p\w\o\w\4\s\s\f\a\t\i\r\s\9\1\1\b\c\1\e\c\t\b\k\j\0\x\k\1\r\m\l\o\b\f\s\2\b\5\6\d\a\3\8\x\d\d\7\t\u\d\8\g\m\b\b\u\r\v\2\8\4\b\4\m\m\9\2\e\t\z\v\1\v\9\2\x\g\6\9\r\b\p\j\c\q\b\j\a\4\g\d\d\v\0\q\g\i\x\t\5\f\5\5\1\e\i\c\e\v\l\x\0\n\c\f\8\s\6\b\c\8\y\9\f\b\6\e\x\l\d\6\y\b\u\a\c\o\2\h\r\8\e\l\x\u\4\j\x\4\d\8\0\w\4\o\b\c\s\u\a\8\h\x\s\z\m\w\f\v\4\6\o\d\z\g\u\p\6\y\v\k\2\2\a\3\v\2\a\a\k\d\7\r\q\x\7\v\u\1\m\7\b\q\a\9\8\g\q\q\1\6\m\2\0\k\g\u\1\d\l\0\0\i\8\m\5\r\u\a\4\o\4\o\z\4\0\6\r\z\9\0\h\f\8\t\4\q\d\a\1\2\u\6\i\q\m\g\x\3\o\c\i\1\p\q\d\v\t\p\o\c\4\l\1\n\u\s\4\x\6\c\b\g\k\o\5\i\u\c\2\q\a\0\0\j\8\9\t\6\n\e\d\b\9\y\s\k\b\7\x\s\6\y\l\s\c\x\t\2\p\e\d\n\n\0\g\x\v\n\9\3\d\3\v\d\8\l\3\z\g\g\k\a\0\4\l\m\l\4\h\b\g\i\x\h\z\f\2\v\n\1\0\1\n\l\h\3\9\g\0\5\s\d\1\s\m\4\i\u\1\7\d\g\g\1\i\p\j\7\6\1\w\i\n\u\s\g\x\c\o\b\h\f\7\3\i\0\8\x\m\0\0\s\o\0\4\g\k\a\4\k\h\u\j\x\i\i\j\c\b\z\y\y\i\a\f\x\t\z\3\3\i\p\3\h\u\a\2\e\8\n\m\h\0\n\8\q\8\i\j\z\w\7\s\l\j\9\t\0\6\x\4\a\1\v\g\3\s\g\l\k\b\t\3\m\a\o\t\m\x\7\p\2\v\z\v\n\8\x\q\p\7\7\o\v\q\8\n\k\d\a\w\q\r\4\5\n\w\3\8\f\k\q\i\n\z\h\0\c\l\7\e\5\a\i\z\6\j\1\8\5\v\3\u\q\7\v\t\s\0\r\h\w\s\6\n\a\9\u\9\b\b\h\e\1\j\l\q\d\a\h\y\m\v\m\l\x\6\u\a\t\i\3\n\h\5\f\5\a\j\y\7\a\9\5\o\n\p\8\9\x\p\i\p\2\t\g\y\e\n\0\8\x\x\g\r\z\n\y\k\5\5\d\u\1\r\u\d\b\m\t\j\7\i\a\r\j\z\c\l\9\v\v\m\5\q\q\p\8\p\9\7\r\9\c\r\5\w\e\t\e\s\g\c\3\w\r\5\4\d\u\2\o\k\b\9\n\k\e\3\2\y\q\2\k\c\i\z\5\3\1\p\g\r\s\m\a\c\4\f\y\r\z\l\m\m\r\s\4\w\w\4\b\9\r\c\r\y\f\z\k\o\8\m\5\m\j\s\5\6\r\y\b\5\j\m\m\q\u\o\d\u\w\h\8\r\6\5\w\b\d\m\4\v\s\v\r\c\o\i\t\o\3\x\0\o\s\9\v\8\c\1\x\f\9\t\u\q\j\f\u\2\d\u\d\h\1\z\3\r\q\2\q\l\a\i\d\n\2\w\c\p\7\0\g\g\q\5\k\y\w\x\a\v\u\8\j\9\v\4\f\a\n\d\k\k\4\p\n\7\z\i\v\3\0\z\s\p\r\v\6\6\d\u\r\w\x\f\4\7\j\9\r\r\a\a\k\m\w\7\q\k\c\8\4\g\u\y\x\8\m\1\c\h\c\8\p\a\7\5\7\u\p\q\w\s\j\w\x\3\f\p\i\e\m\j\g\u\p\o\7\0\t\3\j\8\k\3\c\0\y\l\m\0\j\l\j\8\m\m\j\9\z\w\p\r\s\o\k\3\n\y\u\q\j\q\q\h\5\s\r\q\f\4\e\6\r\b\t\e\e\m\3\r\8\1\m\r\d\6\0\j\6\z\i\2\6\q\3\p\n\b\8\y\s\d\i\1\6\a\b\2\h\y\r\a\5\0\s\0\t\o\w\3\t\6\0\i\x\t\t\f\n\t\j\n\t\y\h\h\4\b\f\v\x\l\7\e\6\r\y\8\y\r\3\3\e\m\y\4\9\5\e\w\h\j\1\b\2\p\g\7\y\p\3\i\2\w\c\4\w\o\b\w\m\9\b\7\0
\j\v\d\g\8\z\5\9\o\8\i\w\g\a\v\t\g\b\5\s\6\m\7\7\k\d\6\d\i\5\1\r\f\d\4\4\k\b\6\z\q\3\0\c\p\t\4\g\8\l\h\1\8\c\j\8\7\f\7\q\6\i\g\0\z\4\x\3\u\4\4\a\y\f\j\k\p\o\1\6\e\i\0\z\w\e\d\3\p\w\9\0\z\a\l\2\u\m\s\e\f\a\9\1\5\f\z\1\5\s\f\5\4\h\x\g\o\f\7\0\5\l\o\3\7\z\h\r\f\h\c\9\i\5\8\p\k\2\2\v\7\4\q\q\7\c\h\1\q\x\s\0\c\b\w\0\x\o\y\w\k\1\6\7\2\d\4\h\y\w\t\r\r\u\k\t\5\p\h\o\1\n\v\m\3\s\q\v\k\r\o\t\9\l\5\q\p\w\f\y\i\v\e\n\f\j\t\k\j\s\0\n\7\h\e\z\8\n\r\s\m\d\m\q\8\y\k\c\o\v\b\b\d\4\x\l\m\9\8\k\r\m\z\8\r\o\l\u\l\z\y\d\j\o\v\8\2\z\b\6\e\7\j\t\j\q\v\t\t\h\9\x\d\z\h\d\a\8\r\v\3\g\0\i\l\b\s\u\6\d\a\t\q\a\b\l\7\j\2\u\h\e\t\8\6\y\a\e\j\9\l\n\a\g\d\1\w\0\s\9\g\4\c\l\1\r\2\r\2\s\y\k\1\q\w\d\j\u\d\e\p\u\7\g\7\q\9\b\t\j\z\4\3\b\4\1\n\o\s\k\5\c\t\l\j\7\a\3\8\x\i\1\w\n\w\m\5\2\n\s\m\7\l\u\d\h\s\u\q\4\a\m\z\j\6\7\c\6\2\k\8\q\l\k\j\z\r\4\8\b\u\o\a\1\p\9\3\8\u\5\h\4\7\o\g\j\j\2\p\l\q\a\t\w\r\t\d\g\s\y\m\6\r\d\o\v\3\l\g\y\2\x\d\i\0\2\h\o\h\o\l\a\6\3\g\a\a\3\e\r\4\s\y\l\i\9\h\3\z\m\q\u\4\8\g\t\c\h\z\4\0\a\9\8\5\g\1\8\s\r\c\o\u\l\v\x\p\l\n\n\x\b\4\x\0\0\g\q\c\7\p\g\w\0\0\8\t\g\k\n\i\h\1\6\d\k\x\7\6\h\g ]] 00:06:44.122 00:06:44.122 real 0m1.754s 00:06:44.122 user 0m1.224s 00:06:44.122 sys 0m0.816s 00:06:44.122 22:17:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.122 22:17:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:44.122 ************************************ 00:06:44.122 END TEST dd_rw_offset 00:06:44.122 ************************************ 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.381 22:17:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.381 [2024-07-15 22:17:57.820199] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:44.381 [2024-07-15 22:17:57.820277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63076 ] 00:06:44.381 { 00:06:44.381 "subsystems": [ 00:06:44.381 { 00:06:44.381 "subsystem": "bdev", 00:06:44.381 "config": [ 00:06:44.381 { 00:06:44.381 "params": { 00:06:44.381 "trtype": "pcie", 00:06:44.381 "traddr": "0000:00:10.0", 00:06:44.381 "name": "Nvme0" 00:06:44.381 }, 00:06:44.381 "method": "bdev_nvme_attach_controller" 00:06:44.381 }, 00:06:44.381 { 00:06:44.381 "method": "bdev_wait_for_examine" 00:06:44.381 } 00:06:44.381 ] 00:06:44.381 } 00:06:44.381 ] 00:06:44.381 } 00:06:44.381 [2024-07-15 22:17:57.962001] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.639 [2024-07-15 22:17:58.114587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.639 [2024-07-15 22:17:58.190306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.157  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.157 00:06:45.157 22:17:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.157 00:06:45.157 real 0m22.109s 00:06:45.157 user 0m15.549s 00:06:45.157 sys 0m9.158s 00:06:45.157 22:17:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.157 22:17:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.157 ************************************ 00:06:45.157 END TEST spdk_dd_basic_rw 00:06:45.157 ************************************ 00:06:45.157 22:17:58 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:45.157 22:17:58 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:45.157 22:17:58 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.157 22:17:58 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.157 22:17:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.157 ************************************ 00:06:45.157 START TEST spdk_dd_posix 00:06:45.157 ************************************ 00:06:45.157 22:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:45.417 * Looking for test storage... 
00:06:45.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:45.417 * First test run, liburing in use 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.417 ************************************ 00:06:45.417 START TEST dd_flag_append 00:06:45.417 ************************************ 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=txuk0u18txksxuvhcuzvkn5ft5j0w36b 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=iwa364d1n4zftlyknz5bzcgvly7d1r6k 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s txuk0u18txksxuvhcuzvkn5ft5j0w36b 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s iwa364d1n4zftlyknz5bzcgvly7d1r6k 00:06:45.417 22:17:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:45.417 [2024-07-15 22:17:58.904446] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:45.417 [2024-07-15 22:17:58.904527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63146 ] 00:06:45.417 [2024-07-15 22:17:59.032402] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.676 [2024-07-15 22:17:59.183889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.676 [2024-07-15 22:17:59.262854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.194  Copying: 32/32 [B] (average 31 kBps) 00:06:46.194 00:06:46.194 ************************************ 00:06:46.194 END TEST dd_flag_append 00:06:46.194 ************************************ 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ iwa364d1n4zftlyknz5bzcgvly7d1r6ktxuk0u18txksxuvhcuzvkn5ft5j0w36b == \i\w\a\3\6\4\d\1\n\4\z\f\t\l\y\k\n\z\5\b\z\c\g\v\l\y\7\d\1\r\6\k\t\x\u\k\0\u\1\8\t\x\k\s\x\u\v\h\c\u\z\v\k\n\5\f\t\5\j\0\w\3\6\b ]] 00:06:46.194 00:06:46.194 real 0m0.784s 00:06:46.194 user 0m0.470s 00:06:46.194 sys 0m0.385s 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:46.194 ************************************ 00:06:46.194 START TEST dd_flag_directory 00:06:46.194 ************************************ 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.194 22:17:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.194 [2024-07-15 22:17:59.755611] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:46.194 [2024-07-15 22:17:59.755693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:06:46.452 [2024-07-15 22:17:59.898587] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.452 [2024-07-15 22:18:00.052249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.710 [2024-07-15 22:18:00.126167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.710 [2024-07-15 22:18:00.172564] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.710 [2024-07-15 22:18:00.172646] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.710 [2024-07-15 22:18:00.172661] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.968 [2024-07-15 22:18:00.347539] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.968 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.969 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.969 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.969 22:18:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.969 [2024-07-15 22:18:00.537098] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:46.969 [2024-07-15 22:18:00.537207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:06:47.227 [2024-07-15 22:18:00.688406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.227 [2024-07-15 22:18:00.840264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.486 [2024-07-15 22:18:00.914109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.486 [2024-07-15 22:18:00.961396] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:47.486 [2024-07-15 22:18:00.961468] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:47.486 [2024-07-15 22:18:00.961482] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.744 [2024-07-15 22:18:01.128675] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.744 00:06:47.744 real 0m1.564s 00:06:47.744 user 0m0.951s 00:06:47.744 sys 0m0.403s 00:06:47.744 ************************************ 00:06:47.744 END TEST dd_flag_directory 00:06:47.744 ************************************ 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.744 ************************************ 00:06:47.744 START TEST dd_flag_nofollow 00:06:47.744 ************************************ 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.744 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.745 22:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.003 
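For the nofollow case being set up above: dd.dump0.link and dd.dump1.link are symlinks created with ln -fs, and the expectation is that opening them with the nofollow flag fails with ELOOP, which the libc error string renders as "Too many levels of symbolic links", the exact message printed further down. A small hedged reproduction with coreutils dd standing in for spdk_dd; the temp paths are assumptions:

    tgt=$(mktemp)
    ln -sf "$tgt" "${tgt}.link"
    # iflag=nofollow opens with O_NOFOLLOW, so a symlink is refused with ELOOP
    if ! dd if="${tgt}.link" iflag=nofollow of=/dev/null status=none 2>/dev/null; then
        echo 'symlink refused, as the trace expects'
    fi
    rm -f "$tgt" "${tgt}.link"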
[2024-07-15 22:18:01.406231] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:48.003 [2024-07-15 22:18:01.406338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63218 ] 00:06:48.003 [2024-07-15 22:18:01.542548] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.261 [2024-07-15 22:18:01.700830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.261 [2024-07-15 22:18:01.780314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.261 [2024-07-15 22:18:01.828250] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:48.261 [2024-07-15 22:18:01.828320] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:48.261 [2024-07-15 22:18:01.828337] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.520 [2024-07-15 22:18:02.003327] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.520 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:48.779 [2024-07-15 22:18:02.202429] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:48.779 [2024-07-15 22:18:02.202544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:06:48.779 [2024-07-15 22:18:02.347330] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.038 [2024-07-15 22:18:02.503182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.038 [2024-07-15 22:18:02.580105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.038 [2024-07-15 22:18:02.627325] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:49.038 [2024-07-15 22:18:02.627391] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:49.038 [2024-07-15 22:18:02.627406] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.296 [2024-07-15 22:18:02.799044] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:49.554 22:18:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.554 [2024-07-15 22:18:02.995780] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
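The last spdk_dd call in the record above drops the nofollow flags, so both links are simply dereferenced and the 512-byte copy is expected to succeed (the Copying: 512/512 line below). The counterpart sketch, again with coreutils dd and throwaway paths as assumptions:

    src=$(mktemp)
    printf %s 'payload' > "$src"
    ln -sf "$src" "${src}.link"
    # no nofollow flag: open(2) follows the link and the read goes through
    dd if="${src}.link" of=/dev/null status=none && echo 'link followed by default'
    rm -f "$src" "${src}.link"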
00:06:49.554 [2024-07-15 22:18:02.995864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63246 ] 00:06:49.554 [2024-07-15 22:18:03.140746] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.812 [2024-07-15 22:18:03.306641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.812 [2024-07-15 22:18:03.388338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.378  Copying: 512/512 [B] (average 500 kBps) 00:06:50.378 00:06:50.378 ************************************ 00:06:50.378 END TEST dd_flag_nofollow 00:06:50.378 ************************************ 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xlcalcgx4px5cr16qa9u2bcy82daad8vi1wxwk6gzgu4arli9kgb51wohgywxcx1y2suutvmmno9gm0tng3ged5118qbdrik39rnx4zrzpilm8d6xasnt2tztnnnsj38qkxdnwy85jdli9e43ji1jxgaxm6wf5356hkkishlda4yaeelxorly1be5zay5g059n2o14ghwoamjq6bwdehdshjbaf4r25lvq1celfx60f2n7ftz8euejkm5856x2u82c4fo2ayjdnsr1wp58koa6flbduxqmgvvolcoy03wt0ya20soa63ql20v0lt7oma04nom2gs7tnngdcfnmq08ivrtnlqy0qr2r07llh5nvrc0dhd28ufapyyjzgwuesb63umio2o9b43w88cghy9244plw7qxxk88w9v50mno5l88l1d1i2abwx73os2icet1u1zec4yhkb5ppux2yuextqpi43ujdx2a38r8knmpfzt3jtks3z5sm9ebcdbki3l == \x\l\c\a\l\c\g\x\4\p\x\5\c\r\1\6\q\a\9\u\2\b\c\y\8\2\d\a\a\d\8\v\i\1\w\x\w\k\6\g\z\g\u\4\a\r\l\i\9\k\g\b\5\1\w\o\h\g\y\w\x\c\x\1\y\2\s\u\u\t\v\m\m\n\o\9\g\m\0\t\n\g\3\g\e\d\5\1\1\8\q\b\d\r\i\k\3\9\r\n\x\4\z\r\z\p\i\l\m\8\d\6\x\a\s\n\t\2\t\z\t\n\n\n\s\j\3\8\q\k\x\d\n\w\y\8\5\j\d\l\i\9\e\4\3\j\i\1\j\x\g\a\x\m\6\w\f\5\3\5\6\h\k\k\i\s\h\l\d\a\4\y\a\e\e\l\x\o\r\l\y\1\b\e\5\z\a\y\5\g\0\5\9\n\2\o\1\4\g\h\w\o\a\m\j\q\6\b\w\d\e\h\d\s\h\j\b\a\f\4\r\2\5\l\v\q\1\c\e\l\f\x\6\0\f\2\n\7\f\t\z\8\e\u\e\j\k\m\5\8\5\6\x\2\u\8\2\c\4\f\o\2\a\y\j\d\n\s\r\1\w\p\5\8\k\o\a\6\f\l\b\d\u\x\q\m\g\v\v\o\l\c\o\y\0\3\w\t\0\y\a\2\0\s\o\a\6\3\q\l\2\0\v\0\l\t\7\o\m\a\0\4\n\o\m\2\g\s\7\t\n\n\g\d\c\f\n\m\q\0\8\i\v\r\t\n\l\q\y\0\q\r\2\r\0\7\l\l\h\5\n\v\r\c\0\d\h\d\2\8\u\f\a\p\y\y\j\z\g\w\u\e\s\b\6\3\u\m\i\o\2\o\9\b\4\3\w\8\8\c\g\h\y\9\2\4\4\p\l\w\7\q\x\x\k\8\8\w\9\v\5\0\m\n\o\5\l\8\8\l\1\d\1\i\2\a\b\w\x\7\3\o\s\2\i\c\e\t\1\u\1\z\e\c\4\y\h\k\b\5\p\p\u\x\2\y\u\e\x\t\q\p\i\4\3\u\j\d\x\2\a\3\8\r\8\k\n\m\p\f\z\t\3\j\t\k\s\3\z\5\s\m\9\e\b\c\d\b\k\i\3\l ]] 00:06:50.378 00:06:50.378 real 0m2.395s 00:06:50.378 user 0m1.436s 00:06:50.378 sys 0m0.818s 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:50.378 ************************************ 00:06:50.378 START TEST dd_flag_noatime 00:06:50.378 ************************************ 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:50.378 22:18:03 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721081883 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721081883 00:06:50.378 22:18:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:51.312 22:18:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.312 [2024-07-15 22:18:04.892425] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:51.312 [2024-07-15 22:18:04.892524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63288 ] 00:06:51.570 [2024-07-15 22:18:05.035631] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.570 [2024-07-15 22:18:05.184330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.886 [2024-07-15 22:18:05.257348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.145  Copying: 512/512 [B] (average 500 kBps) 00:06:52.145 00:06:52.145 22:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.145 22:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721081883 )) 00:06:52.145 22:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.145 22:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721081883 )) 00:06:52.145 22:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.145 [2024-07-15 22:18:05.684467] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
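The noatime assertions above hinge on O_NOATIME: reading dd.dump0 through --iflag=noatime must leave its stat %X value (access time in epoch seconds) at the 1721081883 captured before the copy, while the posix.sh@73 check below covers the follow-up copy made without the flag. A hedged, self-contained version of the first half using coreutils dd; O_NOATIME requires the caller to own the file, which a fresh mktemp file satisfies, and whether a plain read would then advance atime depends on the mount's relatime policy, which is outside this sketch:

    src=$(mktemp)
    printf %s 'payload' > "$src"
    before=$(stat --printf=%X "$src")   # access time before the read
    sleep 1
    dd if="$src" iflag=noatime of=/dev/null status=none
    after=$(stat --printf=%X "$src")    # unchanged, because the read used O_NOATIME
    (( before == after )) && echo 'noatime read left atime alone'
    rm -f "$src"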
00:06:52.145 [2024-07-15 22:18:05.684564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63302 ] 00:06:52.404 [2024-07-15 22:18:05.829685] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.404 [2024-07-15 22:18:05.981031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.663 [2024-07-15 22:18:06.054875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.922  Copying: 512/512 [B] (average 500 kBps) 00:06:52.922 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.922 ************************************ 00:06:52.922 END TEST dd_flag_noatime 00:06:52.922 ************************************ 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721081886 )) 00:06:52.922 00:06:52.922 real 0m2.603s 00:06:52.922 user 0m0.959s 00:06:52.922 sys 0m0.778s 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:52.922 ************************************ 00:06:52.922 START TEST dd_flags_misc 00:06:52.922 ************************************ 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.922 22:18:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:52.922 [2024-07-15 22:18:06.546992] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
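dd_flags_misc, which the records below step through, is a 2x4 matrix: each input flag in flags_ro=(direct nonblock) is paired with every output flag in flags_rw=(direct nonblock sync dsync), a 512-byte payload regenerated per read-flag group is copied with that pair, and the result is compared against the generated pattern. A hedged reconstruction of the loop; the spdk_dd path is the one printed in the trace but running it needs a built SPDK tree, the relative dd.dump* names are stand-ins, and direct additionally needs a filesystem that accepts O_DIRECT:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        head -c 512 /dev/urandom > dd.dump0    # stand-in for gen_bytes 512
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            cmp dd.dump0 dd.dump1              # the suite compares the pattern inline instead
        done
    done

Eight Copying: 512/512 records follow, one per flag pair.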
00:06:52.922 [2024-07-15 22:18:06.547288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63336 ] 00:06:53.180 [2024-07-15 22:18:06.685698] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.439 [2024-07-15 22:18:06.836400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.439 [2024-07-15 22:18:06.910300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.697  Copying: 512/512 [B] (average 500 kBps) 00:06:53.697 00:06:53.697 22:18:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ie5axm13bpw3tmmecns5oq4x4gz0hov3lp0ery2qqlfgftv1gm7c4625tnmlmisjia18nxj3mu5lp1nl2b1ijj40bumzhynuuqelukh6ljynrhloco0ewlxf9xmwloasurszuy1b0gz0idcclplre5ftpynr8a6prqmizlikj8yszyya5eqwbj1nx3mxc3s7yk28u3vty07lbwtlupvg999fj3hdq8ii8l2pxeqfocoupo586gs7ygj5lgxcgla7bmf2cydcy022huz4jzpkr2u2xyzr2exh54b1nfnlz1bjjrsbajmcfxn0h06rvqe0pfpqw9ajlao81iep68niopidsjwks3qd5psdquta1iojvze8ylfw1k4jyt3mj5j6ov9nbt2zgg8qcyqryi7w5z4wpybc1r27dz3wg21xybhhwsa7il577txa2a74i02w7bqb0qnsbjxaplrihz2xeoetxcivjrqywtdaq8u4x8kpkkrkkwxjcth64a5ilyew == \i\e\5\a\x\m\1\3\b\p\w\3\t\m\m\e\c\n\s\5\o\q\4\x\4\g\z\0\h\o\v\3\l\p\0\e\r\y\2\q\q\l\f\g\f\t\v\1\g\m\7\c\4\6\2\5\t\n\m\l\m\i\s\j\i\a\1\8\n\x\j\3\m\u\5\l\p\1\n\l\2\b\1\i\j\j\4\0\b\u\m\z\h\y\n\u\u\q\e\l\u\k\h\6\l\j\y\n\r\h\l\o\c\o\0\e\w\l\x\f\9\x\m\w\l\o\a\s\u\r\s\z\u\y\1\b\0\g\z\0\i\d\c\c\l\p\l\r\e\5\f\t\p\y\n\r\8\a\6\p\r\q\m\i\z\l\i\k\j\8\y\s\z\y\y\a\5\e\q\w\b\j\1\n\x\3\m\x\c\3\s\7\y\k\2\8\u\3\v\t\y\0\7\l\b\w\t\l\u\p\v\g\9\9\9\f\j\3\h\d\q\8\i\i\8\l\2\p\x\e\q\f\o\c\o\u\p\o\5\8\6\g\s\7\y\g\j\5\l\g\x\c\g\l\a\7\b\m\f\2\c\y\d\c\y\0\2\2\h\u\z\4\j\z\p\k\r\2\u\2\x\y\z\r\2\e\x\h\5\4\b\1\n\f\n\l\z\1\b\j\j\r\s\b\a\j\m\c\f\x\n\0\h\0\6\r\v\q\e\0\p\f\p\q\w\9\a\j\l\a\o\8\1\i\e\p\6\8\n\i\o\p\i\d\s\j\w\k\s\3\q\d\5\p\s\d\q\u\t\a\1\i\o\j\v\z\e\8\y\l\f\w\1\k\4\j\y\t\3\m\j\5\j\6\o\v\9\n\b\t\2\z\g\g\8\q\c\y\q\r\y\i\7\w\5\z\4\w\p\y\b\c\1\r\2\7\d\z\3\w\g\2\1\x\y\b\h\h\w\s\a\7\i\l\5\7\7\t\x\a\2\a\7\4\i\0\2\w\7\b\q\b\0\q\n\s\b\j\x\a\p\l\r\i\h\z\2\x\e\o\e\t\x\c\i\v\j\r\q\y\w\t\d\a\q\8\u\4\x\8\k\p\k\k\r\k\k\w\x\j\c\t\h\6\4\a\5\i\l\y\e\w ]] 00:06:53.697 22:18:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.697 22:18:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:53.697 [2024-07-15 22:18:07.303494] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:53.697 [2024-07-15 22:18:07.303581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63351 ] 00:06:53.956 [2024-07-15 22:18:07.449133] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.214 [2024-07-15 22:18:07.599998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.214 [2024-07-15 22:18:07.673649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.473  Copying: 512/512 [B] (average 500 kBps) 00:06:54.473 00:06:54.473 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ie5axm13bpw3tmmecns5oq4x4gz0hov3lp0ery2qqlfgftv1gm7c4625tnmlmisjia18nxj3mu5lp1nl2b1ijj40bumzhynuuqelukh6ljynrhloco0ewlxf9xmwloasurszuy1b0gz0idcclplre5ftpynr8a6prqmizlikj8yszyya5eqwbj1nx3mxc3s7yk28u3vty07lbwtlupvg999fj3hdq8ii8l2pxeqfocoupo586gs7ygj5lgxcgla7bmf2cydcy022huz4jzpkr2u2xyzr2exh54b1nfnlz1bjjrsbajmcfxn0h06rvqe0pfpqw9ajlao81iep68niopidsjwks3qd5psdquta1iojvze8ylfw1k4jyt3mj5j6ov9nbt2zgg8qcyqryi7w5z4wpybc1r27dz3wg21xybhhwsa7il577txa2a74i02w7bqb0qnsbjxaplrihz2xeoetxcivjrqywtdaq8u4x8kpkkrkkwxjcth64a5ilyew == \i\e\5\a\x\m\1\3\b\p\w\3\t\m\m\e\c\n\s\5\o\q\4\x\4\g\z\0\h\o\v\3\l\p\0\e\r\y\2\q\q\l\f\g\f\t\v\1\g\m\7\c\4\6\2\5\t\n\m\l\m\i\s\j\i\a\1\8\n\x\j\3\m\u\5\l\p\1\n\l\2\b\1\i\j\j\4\0\b\u\m\z\h\y\n\u\u\q\e\l\u\k\h\6\l\j\y\n\r\h\l\o\c\o\0\e\w\l\x\f\9\x\m\w\l\o\a\s\u\r\s\z\u\y\1\b\0\g\z\0\i\d\c\c\l\p\l\r\e\5\f\t\p\y\n\r\8\a\6\p\r\q\m\i\z\l\i\k\j\8\y\s\z\y\y\a\5\e\q\w\b\j\1\n\x\3\m\x\c\3\s\7\y\k\2\8\u\3\v\t\y\0\7\l\b\w\t\l\u\p\v\g\9\9\9\f\j\3\h\d\q\8\i\i\8\l\2\p\x\e\q\f\o\c\o\u\p\o\5\8\6\g\s\7\y\g\j\5\l\g\x\c\g\l\a\7\b\m\f\2\c\y\d\c\y\0\2\2\h\u\z\4\j\z\p\k\r\2\u\2\x\y\z\r\2\e\x\h\5\4\b\1\n\f\n\l\z\1\b\j\j\r\s\b\a\j\m\c\f\x\n\0\h\0\6\r\v\q\e\0\p\f\p\q\w\9\a\j\l\a\o\8\1\i\e\p\6\8\n\i\o\p\i\d\s\j\w\k\s\3\q\d\5\p\s\d\q\u\t\a\1\i\o\j\v\z\e\8\y\l\f\w\1\k\4\j\y\t\3\m\j\5\j\6\o\v\9\n\b\t\2\z\g\g\8\q\c\y\q\r\y\i\7\w\5\z\4\w\p\y\b\c\1\r\2\7\d\z\3\w\g\2\1\x\y\b\h\h\w\s\a\7\i\l\5\7\7\t\x\a\2\a\7\4\i\0\2\w\7\b\q\b\0\q\n\s\b\j\x\a\p\l\r\i\h\z\2\x\e\o\e\t\x\c\i\v\j\r\q\y\w\t\d\a\q\8\u\4\x\8\k\p\k\k\r\k\k\w\x\j\c\t\h\6\4\a\5\i\l\y\e\w ]] 00:06:54.473 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.473 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:54.473 [2024-07-15 22:18:08.077800] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:54.473 [2024-07-15 22:18:08.077873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:06:54.761 [2024-07-15 22:18:08.222205] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.761 [2024-07-15 22:18:08.372814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.020 [2024-07-15 22:18:08.446626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.279  Copying: 512/512 [B] (average 166 kBps) 00:06:55.279 00:06:55.279 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ie5axm13bpw3tmmecns5oq4x4gz0hov3lp0ery2qqlfgftv1gm7c4625tnmlmisjia18nxj3mu5lp1nl2b1ijj40bumzhynuuqelukh6ljynrhloco0ewlxf9xmwloasurszuy1b0gz0idcclplre5ftpynr8a6prqmizlikj8yszyya5eqwbj1nx3mxc3s7yk28u3vty07lbwtlupvg999fj3hdq8ii8l2pxeqfocoupo586gs7ygj5lgxcgla7bmf2cydcy022huz4jzpkr2u2xyzr2exh54b1nfnlz1bjjrsbajmcfxn0h06rvqe0pfpqw9ajlao81iep68niopidsjwks3qd5psdquta1iojvze8ylfw1k4jyt3mj5j6ov9nbt2zgg8qcyqryi7w5z4wpybc1r27dz3wg21xybhhwsa7il577txa2a74i02w7bqb0qnsbjxaplrihz2xeoetxcivjrqywtdaq8u4x8kpkkrkkwxjcth64a5ilyew == \i\e\5\a\x\m\1\3\b\p\w\3\t\m\m\e\c\n\s\5\o\q\4\x\4\g\z\0\h\o\v\3\l\p\0\e\r\y\2\q\q\l\f\g\f\t\v\1\g\m\7\c\4\6\2\5\t\n\m\l\m\i\s\j\i\a\1\8\n\x\j\3\m\u\5\l\p\1\n\l\2\b\1\i\j\j\4\0\b\u\m\z\h\y\n\u\u\q\e\l\u\k\h\6\l\j\y\n\r\h\l\o\c\o\0\e\w\l\x\f\9\x\m\w\l\o\a\s\u\r\s\z\u\y\1\b\0\g\z\0\i\d\c\c\l\p\l\r\e\5\f\t\p\y\n\r\8\a\6\p\r\q\m\i\z\l\i\k\j\8\y\s\z\y\y\a\5\e\q\w\b\j\1\n\x\3\m\x\c\3\s\7\y\k\2\8\u\3\v\t\y\0\7\l\b\w\t\l\u\p\v\g\9\9\9\f\j\3\h\d\q\8\i\i\8\l\2\p\x\e\q\f\o\c\o\u\p\o\5\8\6\g\s\7\y\g\j\5\l\g\x\c\g\l\a\7\b\m\f\2\c\y\d\c\y\0\2\2\h\u\z\4\j\z\p\k\r\2\u\2\x\y\z\r\2\e\x\h\5\4\b\1\n\f\n\l\z\1\b\j\j\r\s\b\a\j\m\c\f\x\n\0\h\0\6\r\v\q\e\0\p\f\p\q\w\9\a\j\l\a\o\8\1\i\e\p\6\8\n\i\o\p\i\d\s\j\w\k\s\3\q\d\5\p\s\d\q\u\t\a\1\i\o\j\v\z\e\8\y\l\f\w\1\k\4\j\y\t\3\m\j\5\j\6\o\v\9\n\b\t\2\z\g\g\8\q\c\y\q\r\y\i\7\w\5\z\4\w\p\y\b\c\1\r\2\7\d\z\3\w\g\2\1\x\y\b\h\h\w\s\a\7\i\l\5\7\7\t\x\a\2\a\7\4\i\0\2\w\7\b\q\b\0\q\n\s\b\j\x\a\p\l\r\i\h\z\2\x\e\o\e\t\x\c\i\v\j\r\q\y\w\t\d\a\q\8\u\4\x\8\k\p\k\k\r\k\k\w\x\j\c\t\h\6\4\a\5\i\l\y\e\w ]] 00:06:55.279 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.279 22:18:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:55.279 [2024-07-15 22:18:08.840726] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:55.279 [2024-07-15 22:18:08.840829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63370 ] 00:06:55.539 [2024-07-15 22:18:08.986056] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.539 [2024-07-15 22:18:09.131264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.797 [2024-07-15 22:18:09.204272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.056  Copying: 512/512 [B] (average 250 kBps) 00:06:56.056 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ie5axm13bpw3tmmecns5oq4x4gz0hov3lp0ery2qqlfgftv1gm7c4625tnmlmisjia18nxj3mu5lp1nl2b1ijj40bumzhynuuqelukh6ljynrhloco0ewlxf9xmwloasurszuy1b0gz0idcclplre5ftpynr8a6prqmizlikj8yszyya5eqwbj1nx3mxc3s7yk28u3vty07lbwtlupvg999fj3hdq8ii8l2pxeqfocoupo586gs7ygj5lgxcgla7bmf2cydcy022huz4jzpkr2u2xyzr2exh54b1nfnlz1bjjrsbajmcfxn0h06rvqe0pfpqw9ajlao81iep68niopidsjwks3qd5psdquta1iojvze8ylfw1k4jyt3mj5j6ov9nbt2zgg8qcyqryi7w5z4wpybc1r27dz3wg21xybhhwsa7il577txa2a74i02w7bqb0qnsbjxaplrihz2xeoetxcivjrqywtdaq8u4x8kpkkrkkwxjcth64a5ilyew == \i\e\5\a\x\m\1\3\b\p\w\3\t\m\m\e\c\n\s\5\o\q\4\x\4\g\z\0\h\o\v\3\l\p\0\e\r\y\2\q\q\l\f\g\f\t\v\1\g\m\7\c\4\6\2\5\t\n\m\l\m\i\s\j\i\a\1\8\n\x\j\3\m\u\5\l\p\1\n\l\2\b\1\i\j\j\4\0\b\u\m\z\h\y\n\u\u\q\e\l\u\k\h\6\l\j\y\n\r\h\l\o\c\o\0\e\w\l\x\f\9\x\m\w\l\o\a\s\u\r\s\z\u\y\1\b\0\g\z\0\i\d\c\c\l\p\l\r\e\5\f\t\p\y\n\r\8\a\6\p\r\q\m\i\z\l\i\k\j\8\y\s\z\y\y\a\5\e\q\w\b\j\1\n\x\3\m\x\c\3\s\7\y\k\2\8\u\3\v\t\y\0\7\l\b\w\t\l\u\p\v\g\9\9\9\f\j\3\h\d\q\8\i\i\8\l\2\p\x\e\q\f\o\c\o\u\p\o\5\8\6\g\s\7\y\g\j\5\l\g\x\c\g\l\a\7\b\m\f\2\c\y\d\c\y\0\2\2\h\u\z\4\j\z\p\k\r\2\u\2\x\y\z\r\2\e\x\h\5\4\b\1\n\f\n\l\z\1\b\j\j\r\s\b\a\j\m\c\f\x\n\0\h\0\6\r\v\q\e\0\p\f\p\q\w\9\a\j\l\a\o\8\1\i\e\p\6\8\n\i\o\p\i\d\s\j\w\k\s\3\q\d\5\p\s\d\q\u\t\a\1\i\o\j\v\z\e\8\y\l\f\w\1\k\4\j\y\t\3\m\j\5\j\6\o\v\9\n\b\t\2\z\g\g\8\q\c\y\q\r\y\i\7\w\5\z\4\w\p\y\b\c\1\r\2\7\d\z\3\w\g\2\1\x\y\b\h\h\w\s\a\7\i\l\5\7\7\t\x\a\2\a\7\4\i\0\2\w\7\b\q\b\0\q\n\s\b\j\x\a\p\l\r\i\h\z\2\x\e\o\e\t\x\c\i\v\j\r\q\y\w\t\d\a\q\8\u\4\x\8\k\p\k\k\r\k\k\w\x\j\c\t\h\6\4\a\5\i\l\y\e\w ]] 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.056 22:18:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:56.056 [2024-07-15 22:18:09.613235] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:56.056 [2024-07-15 22:18:09.613320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:06:56.313 [2024-07-15 22:18:09.757450] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.313 [2024-07-15 22:18:09.906490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.571 [2024-07-15 22:18:09.979847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.828  Copying: 512/512 [B] (average 500 kBps) 00:06:56.828 00:06:56.828 22:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qz7ojjmacvqel1gob84ud5bf25cxwmhl2bo9jo1cacjmm16nbdtqh283l92fc35frtp1xae1k5zw01s73luhli4dy6v5oi5jwhjb9m9twvvrc0og5otdf75vdfi3c50myd6ukg4y0x1gk2p7anagcq0ygbug7kr2gtc3a57nyppg00zl7hjn1eslp7anr58lnkhk9fvqbbvcv5kyhkyo5tl2mphl4b20y8jf980x7ivee9126uywosywm5icw5whatv5lb731qn3epdacy3c8imt9bxfzx4r472ld0leph1rz9od3g9aa73phs8iyl3pwsteypvdn6ik3qqxckaf6ws8ujkzr3sxi5qm7w5hwxepooxbfhqefr1rio2mc8boshic5h4nd3orbyg4iw5ybphe55h8ptdynqqpujwsvcqz1ff1ytox80ol9ge7dk4lovfu7hbzj472zrc3l26a5bk5qtoarcdeutebefei4x7gowyeifckgujotdsxdz8y == \q\z\7\o\j\j\m\a\c\v\q\e\l\1\g\o\b\8\4\u\d\5\b\f\2\5\c\x\w\m\h\l\2\b\o\9\j\o\1\c\a\c\j\m\m\1\6\n\b\d\t\q\h\2\8\3\l\9\2\f\c\3\5\f\r\t\p\1\x\a\e\1\k\5\z\w\0\1\s\7\3\l\u\h\l\i\4\d\y\6\v\5\o\i\5\j\w\h\j\b\9\m\9\t\w\v\v\r\c\0\o\g\5\o\t\d\f\7\5\v\d\f\i\3\c\5\0\m\y\d\6\u\k\g\4\y\0\x\1\g\k\2\p\7\a\n\a\g\c\q\0\y\g\b\u\g\7\k\r\2\g\t\c\3\a\5\7\n\y\p\p\g\0\0\z\l\7\h\j\n\1\e\s\l\p\7\a\n\r\5\8\l\n\k\h\k\9\f\v\q\b\b\v\c\v\5\k\y\h\k\y\o\5\t\l\2\m\p\h\l\4\b\2\0\y\8\j\f\9\8\0\x\7\i\v\e\e\9\1\2\6\u\y\w\o\s\y\w\m\5\i\c\w\5\w\h\a\t\v\5\l\b\7\3\1\q\n\3\e\p\d\a\c\y\3\c\8\i\m\t\9\b\x\f\z\x\4\r\4\7\2\l\d\0\l\e\p\h\1\r\z\9\o\d\3\g\9\a\a\7\3\p\h\s\8\i\y\l\3\p\w\s\t\e\y\p\v\d\n\6\i\k\3\q\q\x\c\k\a\f\6\w\s\8\u\j\k\z\r\3\s\x\i\5\q\m\7\w\5\h\w\x\e\p\o\o\x\b\f\h\q\e\f\r\1\r\i\o\2\m\c\8\b\o\s\h\i\c\5\h\4\n\d\3\o\r\b\y\g\4\i\w\5\y\b\p\h\e\5\5\h\8\p\t\d\y\n\q\q\p\u\j\w\s\v\c\q\z\1\f\f\1\y\t\o\x\8\0\o\l\9\g\e\7\d\k\4\l\o\v\f\u\7\h\b\z\j\4\7\2\z\r\c\3\l\2\6\a\5\b\k\5\q\t\o\a\r\c\d\e\u\t\e\b\e\f\e\i\4\x\7\g\o\w\y\e\i\f\c\k\g\u\j\o\t\d\s\x\d\z\8\y ]] 00:06:56.828 22:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.828 22:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:56.828 [2024-07-15 22:18:10.367980] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:56.828 [2024-07-15 22:18:10.368064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63400 ] 00:06:57.086 [2024-07-15 22:18:10.511720] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.086 [2024-07-15 22:18:10.659962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.395 [2024-07-15 22:18:10.732690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.653  Copying: 512/512 [B] (average 500 kBps) 00:06:57.653 00:06:57.653 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qz7ojjmacvqel1gob84ud5bf25cxwmhl2bo9jo1cacjmm16nbdtqh283l92fc35frtp1xae1k5zw01s73luhli4dy6v5oi5jwhjb9m9twvvrc0og5otdf75vdfi3c50myd6ukg4y0x1gk2p7anagcq0ygbug7kr2gtc3a57nyppg00zl7hjn1eslp7anr58lnkhk9fvqbbvcv5kyhkyo5tl2mphl4b20y8jf980x7ivee9126uywosywm5icw5whatv5lb731qn3epdacy3c8imt9bxfzx4r472ld0leph1rz9od3g9aa73phs8iyl3pwsteypvdn6ik3qqxckaf6ws8ujkzr3sxi5qm7w5hwxepooxbfhqefr1rio2mc8boshic5h4nd3orbyg4iw5ybphe55h8ptdynqqpujwsvcqz1ff1ytox80ol9ge7dk4lovfu7hbzj472zrc3l26a5bk5qtoarcdeutebefei4x7gowyeifckgujotdsxdz8y == \q\z\7\o\j\j\m\a\c\v\q\e\l\1\g\o\b\8\4\u\d\5\b\f\2\5\c\x\w\m\h\l\2\b\o\9\j\o\1\c\a\c\j\m\m\1\6\n\b\d\t\q\h\2\8\3\l\9\2\f\c\3\5\f\r\t\p\1\x\a\e\1\k\5\z\w\0\1\s\7\3\l\u\h\l\i\4\d\y\6\v\5\o\i\5\j\w\h\j\b\9\m\9\t\w\v\v\r\c\0\o\g\5\o\t\d\f\7\5\v\d\f\i\3\c\5\0\m\y\d\6\u\k\g\4\y\0\x\1\g\k\2\p\7\a\n\a\g\c\q\0\y\g\b\u\g\7\k\r\2\g\t\c\3\a\5\7\n\y\p\p\g\0\0\z\l\7\h\j\n\1\e\s\l\p\7\a\n\r\5\8\l\n\k\h\k\9\f\v\q\b\b\v\c\v\5\k\y\h\k\y\o\5\t\l\2\m\p\h\l\4\b\2\0\y\8\j\f\9\8\0\x\7\i\v\e\e\9\1\2\6\u\y\w\o\s\y\w\m\5\i\c\w\5\w\h\a\t\v\5\l\b\7\3\1\q\n\3\e\p\d\a\c\y\3\c\8\i\m\t\9\b\x\f\z\x\4\r\4\7\2\l\d\0\l\e\p\h\1\r\z\9\o\d\3\g\9\a\a\7\3\p\h\s\8\i\y\l\3\p\w\s\t\e\y\p\v\d\n\6\i\k\3\q\q\x\c\k\a\f\6\w\s\8\u\j\k\z\r\3\s\x\i\5\q\m\7\w\5\h\w\x\e\p\o\o\x\b\f\h\q\e\f\r\1\r\i\o\2\m\c\8\b\o\s\h\i\c\5\h\4\n\d\3\o\r\b\y\g\4\i\w\5\y\b\p\h\e\5\5\h\8\p\t\d\y\n\q\q\p\u\j\w\s\v\c\q\z\1\f\f\1\y\t\o\x\8\0\o\l\9\g\e\7\d\k\4\l\o\v\f\u\7\h\b\z\j\4\7\2\z\r\c\3\l\2\6\a\5\b\k\5\q\t\o\a\r\c\d\e\u\t\e\b\e\f\e\i\4\x\7\g\o\w\y\e\i\f\c\k\g\u\j\o\t\d\s\x\d\z\8\y ]] 00:06:57.653 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.653 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:57.653 [2024-07-15 22:18:11.126448] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:57.653 [2024-07-15 22:18:11.126520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63404 ] 00:06:57.653 [2024-07-15 22:18:11.271837] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.911 [2024-07-15 22:18:11.421866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.911 [2024-07-15 22:18:11.494964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.425  Copying: 512/512 [B] (average 125 kBps) 00:06:58.425 00:06:58.425 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qz7ojjmacvqel1gob84ud5bf25cxwmhl2bo9jo1cacjmm16nbdtqh283l92fc35frtp1xae1k5zw01s73luhli4dy6v5oi5jwhjb9m9twvvrc0og5otdf75vdfi3c50myd6ukg4y0x1gk2p7anagcq0ygbug7kr2gtc3a57nyppg00zl7hjn1eslp7anr58lnkhk9fvqbbvcv5kyhkyo5tl2mphl4b20y8jf980x7ivee9126uywosywm5icw5whatv5lb731qn3epdacy3c8imt9bxfzx4r472ld0leph1rz9od3g9aa73phs8iyl3pwsteypvdn6ik3qqxckaf6ws8ujkzr3sxi5qm7w5hwxepooxbfhqefr1rio2mc8boshic5h4nd3orbyg4iw5ybphe55h8ptdynqqpujwsvcqz1ff1ytox80ol9ge7dk4lovfu7hbzj472zrc3l26a5bk5qtoarcdeutebefei4x7gowyeifckgujotdsxdz8y == \q\z\7\o\j\j\m\a\c\v\q\e\l\1\g\o\b\8\4\u\d\5\b\f\2\5\c\x\w\m\h\l\2\b\o\9\j\o\1\c\a\c\j\m\m\1\6\n\b\d\t\q\h\2\8\3\l\9\2\f\c\3\5\f\r\t\p\1\x\a\e\1\k\5\z\w\0\1\s\7\3\l\u\h\l\i\4\d\y\6\v\5\o\i\5\j\w\h\j\b\9\m\9\t\w\v\v\r\c\0\o\g\5\o\t\d\f\7\5\v\d\f\i\3\c\5\0\m\y\d\6\u\k\g\4\y\0\x\1\g\k\2\p\7\a\n\a\g\c\q\0\y\g\b\u\g\7\k\r\2\g\t\c\3\a\5\7\n\y\p\p\g\0\0\z\l\7\h\j\n\1\e\s\l\p\7\a\n\r\5\8\l\n\k\h\k\9\f\v\q\b\b\v\c\v\5\k\y\h\k\y\o\5\t\l\2\m\p\h\l\4\b\2\0\y\8\j\f\9\8\0\x\7\i\v\e\e\9\1\2\6\u\y\w\o\s\y\w\m\5\i\c\w\5\w\h\a\t\v\5\l\b\7\3\1\q\n\3\e\p\d\a\c\y\3\c\8\i\m\t\9\b\x\f\z\x\4\r\4\7\2\l\d\0\l\e\p\h\1\r\z\9\o\d\3\g\9\a\a\7\3\p\h\s\8\i\y\l\3\p\w\s\t\e\y\p\v\d\n\6\i\k\3\q\q\x\c\k\a\f\6\w\s\8\u\j\k\z\r\3\s\x\i\5\q\m\7\w\5\h\w\x\e\p\o\o\x\b\f\h\q\e\f\r\1\r\i\o\2\m\c\8\b\o\s\h\i\c\5\h\4\n\d\3\o\r\b\y\g\4\i\w\5\y\b\p\h\e\5\5\h\8\p\t\d\y\n\q\q\p\u\j\w\s\v\c\q\z\1\f\f\1\y\t\o\x\8\0\o\l\9\g\e\7\d\k\4\l\o\v\f\u\7\h\b\z\j\4\7\2\z\r\c\3\l\2\6\a\5\b\k\5\q\t\o\a\r\c\d\e\u\t\e\b\e\f\e\i\4\x\7\g\o\w\y\e\i\f\c\k\g\u\j\o\t\d\s\x\d\z\8\y ]] 00:06:58.425 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.425 22:18:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:58.425 [2024-07-15 22:18:11.894361] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:58.425 [2024-07-15 22:18:11.894445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:06:58.425 [2024-07-15 22:18:12.037554] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.695 [2024-07-15 22:18:12.193068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.695 [2024-07-15 22:18:12.266618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.267  Copying: 512/512 [B] (average 166 kBps) 00:06:59.267 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qz7ojjmacvqel1gob84ud5bf25cxwmhl2bo9jo1cacjmm16nbdtqh283l92fc35frtp1xae1k5zw01s73luhli4dy6v5oi5jwhjb9m9twvvrc0og5otdf75vdfi3c50myd6ukg4y0x1gk2p7anagcq0ygbug7kr2gtc3a57nyppg00zl7hjn1eslp7anr58lnkhk9fvqbbvcv5kyhkyo5tl2mphl4b20y8jf980x7ivee9126uywosywm5icw5whatv5lb731qn3epdacy3c8imt9bxfzx4r472ld0leph1rz9od3g9aa73phs8iyl3pwsteypvdn6ik3qqxckaf6ws8ujkzr3sxi5qm7w5hwxepooxbfhqefr1rio2mc8boshic5h4nd3orbyg4iw5ybphe55h8ptdynqqpujwsvcqz1ff1ytox80ol9ge7dk4lovfu7hbzj472zrc3l26a5bk5qtoarcdeutebefei4x7gowyeifckgujotdsxdz8y == \q\z\7\o\j\j\m\a\c\v\q\e\l\1\g\o\b\8\4\u\d\5\b\f\2\5\c\x\w\m\h\l\2\b\o\9\j\o\1\c\a\c\j\m\m\1\6\n\b\d\t\q\h\2\8\3\l\9\2\f\c\3\5\f\r\t\p\1\x\a\e\1\k\5\z\w\0\1\s\7\3\l\u\h\l\i\4\d\y\6\v\5\o\i\5\j\w\h\j\b\9\m\9\t\w\v\v\r\c\0\o\g\5\o\t\d\f\7\5\v\d\f\i\3\c\5\0\m\y\d\6\u\k\g\4\y\0\x\1\g\k\2\p\7\a\n\a\g\c\q\0\y\g\b\u\g\7\k\r\2\g\t\c\3\a\5\7\n\y\p\p\g\0\0\z\l\7\h\j\n\1\e\s\l\p\7\a\n\r\5\8\l\n\k\h\k\9\f\v\q\b\b\v\c\v\5\k\y\h\k\y\o\5\t\l\2\m\p\h\l\4\b\2\0\y\8\j\f\9\8\0\x\7\i\v\e\e\9\1\2\6\u\y\w\o\s\y\w\m\5\i\c\w\5\w\h\a\t\v\5\l\b\7\3\1\q\n\3\e\p\d\a\c\y\3\c\8\i\m\t\9\b\x\f\z\x\4\r\4\7\2\l\d\0\l\e\p\h\1\r\z\9\o\d\3\g\9\a\a\7\3\p\h\s\8\i\y\l\3\p\w\s\t\e\y\p\v\d\n\6\i\k\3\q\q\x\c\k\a\f\6\w\s\8\u\j\k\z\r\3\s\x\i\5\q\m\7\w\5\h\w\x\e\p\o\o\x\b\f\h\q\e\f\r\1\r\i\o\2\m\c\8\b\o\s\h\i\c\5\h\4\n\d\3\o\r\b\y\g\4\i\w\5\y\b\p\h\e\5\5\h\8\p\t\d\y\n\q\q\p\u\j\w\s\v\c\q\z\1\f\f\1\y\t\o\x\8\0\o\l\9\g\e\7\d\k\4\l\o\v\f\u\7\h\b\z\j\4\7\2\z\r\c\3\l\2\6\a\5\b\k\5\q\t\o\a\r\c\d\e\u\t\e\b\e\f\e\i\4\x\7\g\o\w\y\e\i\f\c\k\g\u\j\o\t\d\s\x\d\z\8\y ]] 00:06:59.267 00:06:59.267 real 0m6.149s 00:06:59.267 user 0m3.727s 00:06:59.267 sys 0m2.981s 00:06:59.267 ************************************ 00:06:59.267 END TEST dd_flags_misc 00:06:59.267 ************************************ 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:59.267 * Second test run, disabling liburing, forcing AIO 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 ************************************ 00:06:59.267 START TEST dd_flag_append_forced_aio 00:06:59.267 ************************************ 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=uu7pcgskf83kqfd8lm7x4200ylbncp0n 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=gt8mm5rurwrl5o14lvljooen08jkgam6 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s uu7pcgskf83kqfd8lm7x4200ylbncp0n 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s gt8mm5rurwrl5o14lvljooen08jkgam6 00:06:59.267 22:18:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:59.267 [2024-07-15 22:18:12.767045] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
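From the "Second test run" banner onward the suite repeats the posix cases with the --aio switch, which that banner describes as disabling liburing and forcing AIO. DD_APP is the argv array the suite builds up (the posix.sh@113 line in the records above appends --aio to it), and the spdk_dd call above expands it, so the forced-AIO append case is effectively the sketch below. The binary path is the one printed in the trace; the payload literals and relative dump paths are illustrative assumptions:

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")                   # mirrors dd/posix.sh@113 above
    printf %s 'new' > dd.dump0
    printf %s 'old' > dd.dump1
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(cat dd.dump1) == 'oldnew' ]] && echo 'append behaves the same under AIO'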
00:06:59.267 [2024-07-15 22:18:12.767139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:06:59.524 [2024-07-15 22:18:12.904686] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.524 [2024-07-15 22:18:13.066557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.524 [2024-07-15 22:18:13.149680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.042  Copying: 32/32 [B] (average 31 kBps) 00:07:00.042 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ gt8mm5rurwrl5o14lvljooen08jkgam6uu7pcgskf83kqfd8lm7x4200ylbncp0n == \g\t\8\m\m\5\r\u\r\w\r\l\5\o\1\4\l\v\l\j\o\o\e\n\0\8\j\k\g\a\m\6\u\u\7\p\c\g\s\k\f\8\3\k\q\f\d\8\l\m\7\x\4\2\0\0\y\l\b\n\c\p\0\n ]] 00:07:00.042 00:07:00.042 real 0m0.819s 00:07:00.042 user 0m0.470s 00:07:00.042 sys 0m0.225s 00:07:00.042 ************************************ 00:07:00.042 END TEST dd_flag_append_forced_aio 00:07:00.042 ************************************ 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.042 ************************************ 00:07:00.042 START TEST dd_flag_directory_forced_aio 00:07:00.042 ************************************ 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.042 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.043 22:18:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.043 [2024-07-15 22:18:13.656903] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:00.043 [2024-07-15 22:18:13.656999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63485 ] 00:07:00.300 [2024-07-15 22:18:13.800122] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.557 [2024-07-15 22:18:13.950155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.557 [2024-07-15 22:18:14.023333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.557 [2024-07-15 22:18:14.070373] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.557 [2024-07-15 22:18:14.070434] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.557 [2024-07-15 22:18:14.070449] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.813 [2024-07-15 22:18:14.239253] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.813 22:18:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.813 [2024-07-15 22:18:14.427751] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:00.813 [2024-07-15 22:18:14.427830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63500 ] 00:07:01.070 [2024-07-15 22:18:14.571231] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.330 [2024-07-15 22:18:14.720633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.330 [2024-07-15 22:18:14.793385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.330 [2024-07-15 22:18:14.839977] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.330 [2024-07-15 22:18:14.840039] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.330 [2024-07-15 22:18:14.840053] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.588 [2024-07-15 22:18:15.004739] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:01.588 
22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.588 00:07:01.588 real 0m1.541s 00:07:01.588 user 0m0.942s 00:07:01.588 sys 0m0.388s 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.588 ************************************ 00:07:01.588 END TEST dd_flag_directory_forced_aio 00:07:01.588 ************************************ 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:01.588 ************************************ 00:07:01.588 START TEST dd_flag_nofollow_forced_aio 00:07:01.588 ************************************ 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.588 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.846 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.846 [2024-07-15 22:18:15.280873] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:01.846 [2024-07-15 22:18:15.280950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63523 ] 00:07:01.846 [2024-07-15 22:18:15.424220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.103 [2024-07-15 22:18:15.575772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.103 [2024-07-15 22:18:15.650272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.103 [2024-07-15 22:18:15.698817] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:02.103 [2024-07-15 22:18:15.698885] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:02.103 [2024-07-15 22:18:15.698902] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.367 [2024-07-15 22:18:15.861888] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.367 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.626 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.626 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.626 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.626 22:18:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.626 [2024-07-15 22:18:16.050129] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:02.626 [2024-07-15 22:18:16.050212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63538 ] 00:07:02.626 [2024-07-15 22:18:16.195158] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.905 [2024-07-15 22:18:16.344665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.905 [2024-07-15 22:18:16.418474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.905 [2024-07-15 22:18:16.466070] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.905 [2024-07-15 22:18:16.466368] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.905 [2024-07-15 22:18:16.466536] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.164 [2024-07-15 22:18:16.632082] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:03.164 22:18:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.423 [2024-07-15 22:18:16.838422] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:03.423 [2024-07-15 22:18:16.838762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:07:03.423 [2024-07-15 22:18:16.988130] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.681 [2024-07-15 22:18:17.140106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.681 [2024-07-15 22:18:17.214207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.939  Copying: 512/512 [B] (average 500 kBps) 00:07:03.940 00:07:03.940 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ x0c5o2ntfz4szugeanrgibu0zjtlctwr3yv11qk0zsbp47qhn7kwx2s986cl5mfsiujr7c2bcsd0a8llavrzxwqvcc8dabx2j498u0kq873vjqw6br753a8fg4nvb7xfgxsz4r7l0vt71mhfkbxtuwxdhwihexxm6styl83k8y7pm0ezb1ylrx1lwipj5nj5d6lsq83avcu4mrygp99wf01t7pyfd56hmgsatnd5iw54mqkwr1l8u3vj6seo807ee25uqp28cebjupc55xpdhvl4ekdj57gwyjpjeedflh1ijadkm1y4wg5jn2d87iip8uc7pon1q8alecidpt6staveot5pykr2kji80vyq1o1is2ic7id9s2v5jp70t7m6lxigg6o2fzvd5vbo4cfyy2v295yonvin2fgqyzccv5vw6ifxq4829y3b1sl0avqre4wdc9l5jlfzh2m7hgr9n75ukubyz0kbxgphigv7nk82xyuin9rh8ap9limeyaw2 == \x\0\c\5\o\2\n\t\f\z\4\s\z\u\g\e\a\n\r\g\i\b\u\0\z\j\t\l\c\t\w\r\3\y\v\1\1\q\k\0\z\s\b\p\4\7\q\h\n\7\k\w\x\2\s\9\8\6\c\l\5\m\f\s\i\u\j\r\7\c\2\b\c\s\d\0\a\8\l\l\a\v\r\z\x\w\q\v\c\c\8\d\a\b\x\2\j\4\9\8\u\0\k\q\8\7\3\v\j\q\w\6\b\r\7\5\3\a\8\f\g\4\n\v\b\7\x\f\g\x\s\z\4\r\7\l\0\v\t\7\1\m\h\f\k\b\x\t\u\w\x\d\h\w\i\h\e\x\x\m\6\s\t\y\l\8\3\k\8\y\7\p\m\0\e\z\b\1\y\l\r\x\1\l\w\i\p\j\5\n\j\5\d\6\l\s\q\8\3\a\v\c\u\4\m\r\y\g\p\9\9\w\f\0\1\t\7\p\y\f\d\5\6\h\m\g\s\a\t\n\d\5\i\w\5\4\m\q\k\w\r\1\l\8\u\3\v\j\6\s\e\o\8\0\7\e\e\2\5\u\q\p\2\8\c\e\b\j\u\p\c\5\5\x\p\d\h\v\l\4\e\k\d\j\5\7\g\w\y\j\p\j\e\e\d\f\l\h\1\i\j\a\d\k\m\1\y\4\w\g\5\j\n\2\d\8\7\i\i\p\8\u\c\7\p\o\n\1\q\8\a\l\e\c\i\d\p\t\6\s\t\a\v\e\o\t\5\p\y\k\r\2\k\j\i\8\0\v\y\q\1\o\1\i\s\2\i\c\7\i\d\9\s\2\v\5\j\p\7\0\t\7\m\6\l\x\i\g\g\6\o\2\f\z\v\d\5\v\b\o\4\c\f\y\y\2\v\2\9\5\y\o\n\v\i\n\2\f\g\q\y\z\c\c\v\5\v\w\6\i\f\x\q\4\8\2\9\y\3\b\1\s\l\0\a\v\q\r\e\4\w\d\c\9\l\5\j\l\f\z\h\2\m\7\h\g\r\9\n\7\5\u\k\u\b\y\z\0\k\b\x\g\p\h\i\g\v\7\n\k\8\2\x\y\u\i\n\9\r\h\8\a\p\9\l\i\m\e\y\a\w\2 ]] 00:07:03.940 00:07:03.940 real 0m2.361s 00:07:03.940 user 0m1.414s 00:07:03.940 sys 0m0.609s 00:07:03.940 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.940 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.198 ************************************ 00:07:04.198 END TEST dd_flag_nofollow_forced_aio 
00:07:04.198 ************************************ 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.198 ************************************ 00:07:04.198 START TEST dd_flag_noatime_forced_aio 00:07:04.198 ************************************ 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721081897 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721081897 00:07:04.198 22:18:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:05.132 22:18:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.132 [2024-07-15 22:18:18.746757] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:05.132 [2024-07-15 22:18:18.746912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63597 ] 00:07:05.392 [2024-07-15 22:18:18.897691] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.651 [2024-07-15 22:18:19.045905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.651 [2024-07-15 22:18:19.118443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.910  Copying: 512/512 [B] (average 500 kBps) 00:07:05.910 00:07:05.910 22:18:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.910 22:18:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721081897 )) 00:07:05.910 22:18:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.910 22:18:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721081897 )) 00:07:05.910 22:18:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.169 [2024-07-15 22:18:19.546229] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:06.169 [2024-07-15 22:18:19.546322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63609 ] 00:07:06.169 [2024-07-15 22:18:19.691293] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.428 [2024-07-15 22:18:19.842045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.428 [2024-07-15 22:18:19.914908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.686  Copying: 512/512 [B] (average 500 kBps) 00:07:06.686 00:07:06.686 22:18:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.686 22:18:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721081899 )) 00:07:06.686 00:07:06.686 real 0m2.627s 00:07:06.686 user 0m0.949s 00:07:06.686 sys 0m0.430s 00:07:06.686 22:18:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.686 ************************************ 00:07:06.686 END TEST dd_flag_noatime_forced_aio 00:07:06.686 ************************************ 00:07:06.686 22:18:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.945 22:18:20 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.945 ************************************ 00:07:06.945 START TEST dd_flags_misc_forced_aio 00:07:06.945 ************************************ 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.945 22:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:06.945 [2024-07-15 22:18:20.424368] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:06.945 [2024-07-15 22:18:20.424456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63635 ] 00:07:06.945 [2024-07-15 22:18:20.565507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.204 [2024-07-15 22:18:20.713497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.204 [2024-07-15 22:18:20.787324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.771  Copying: 512/512 [B] (average 500 kBps) 00:07:07.771 00:07:07.771 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ byrw0bdwc005mf7smg591i1j4g038top5u52dkpi4kg6gdbns9fzape2qapqrckxv66f2ujfdmgu0o0dmbbpbortedlpieypqhe7zo0rnifzp9kt77pn29aqlsgbfs7exthxt94nb3f3hdccalw8b2z5w41r06ocruz4uz8lu5bhqzaygdvgabof04cndiqr5t6rf4ll798xcu6qd0ca2mfelx55d4jlayjwqjbko2tobkytep58sikh68mz8pfrr8xmhdhocdl7csbeebb8y3o5y94bb9o06rdabsviv7y9ity600blhpn8ag52w1la1rm6c0re1ykuvhp7a1ib3cs224ph7x7bcg30bighd4f14xqos2q0c0l1bmb3ij6m8qrs6nicnibujemyvsy24z2f7i82bis112j8ud79xv2lp4405rkiw7ib254w9znk714dlbkabylf9hpggxw54i01gkyd3u9josev5iurw45785ub4bm2sh9ieadc69sp == 
\b\y\r\w\0\b\d\w\c\0\0\5\m\f\7\s\m\g\5\9\1\i\1\j\4\g\0\3\8\t\o\p\5\u\5\2\d\k\p\i\4\k\g\6\g\d\b\n\s\9\f\z\a\p\e\2\q\a\p\q\r\c\k\x\v\6\6\f\2\u\j\f\d\m\g\u\0\o\0\d\m\b\b\p\b\o\r\t\e\d\l\p\i\e\y\p\q\h\e\7\z\o\0\r\n\i\f\z\p\9\k\t\7\7\p\n\2\9\a\q\l\s\g\b\f\s\7\e\x\t\h\x\t\9\4\n\b\3\f\3\h\d\c\c\a\l\w\8\b\2\z\5\w\4\1\r\0\6\o\c\r\u\z\4\u\z\8\l\u\5\b\h\q\z\a\y\g\d\v\g\a\b\o\f\0\4\c\n\d\i\q\r\5\t\6\r\f\4\l\l\7\9\8\x\c\u\6\q\d\0\c\a\2\m\f\e\l\x\5\5\d\4\j\l\a\y\j\w\q\j\b\k\o\2\t\o\b\k\y\t\e\p\5\8\s\i\k\h\6\8\m\z\8\p\f\r\r\8\x\m\h\d\h\o\c\d\l\7\c\s\b\e\e\b\b\8\y\3\o\5\y\9\4\b\b\9\o\0\6\r\d\a\b\s\v\i\v\7\y\9\i\t\y\6\0\0\b\l\h\p\n\8\a\g\5\2\w\1\l\a\1\r\m\6\c\0\r\e\1\y\k\u\v\h\p\7\a\1\i\b\3\c\s\2\2\4\p\h\7\x\7\b\c\g\3\0\b\i\g\h\d\4\f\1\4\x\q\o\s\2\q\0\c\0\l\1\b\m\b\3\i\j\6\m\8\q\r\s\6\n\i\c\n\i\b\u\j\e\m\y\v\s\y\2\4\z\2\f\7\i\8\2\b\i\s\1\1\2\j\8\u\d\7\9\x\v\2\l\p\4\4\0\5\r\k\i\w\7\i\b\2\5\4\w\9\z\n\k\7\1\4\d\l\b\k\a\b\y\l\f\9\h\p\g\g\x\w\5\4\i\0\1\g\k\y\d\3\u\9\j\o\s\e\v\5\i\u\r\w\4\5\7\8\5\u\b\4\b\m\2\s\h\9\i\e\a\d\c\6\9\s\p ]] 00:07:07.771 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.771 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:07.771 [2024-07-15 22:18:21.195249] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:07.771 [2024-07-15 22:18:21.195328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:07:07.771 [2024-07-15 22:18:21.338237] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.030 [2024-07-15 22:18:21.487119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.030 [2024-07-15 22:18:21.559408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.289  Copying: 512/512 [B] (average 500 kBps) 00:07:08.289 00:07:08.548 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ byrw0bdwc005mf7smg591i1j4g038top5u52dkpi4kg6gdbns9fzape2qapqrckxv66f2ujfdmgu0o0dmbbpbortedlpieypqhe7zo0rnifzp9kt77pn29aqlsgbfs7exthxt94nb3f3hdccalw8b2z5w41r06ocruz4uz8lu5bhqzaygdvgabof04cndiqr5t6rf4ll798xcu6qd0ca2mfelx55d4jlayjwqjbko2tobkytep58sikh68mz8pfrr8xmhdhocdl7csbeebb8y3o5y94bb9o06rdabsviv7y9ity600blhpn8ag52w1la1rm6c0re1ykuvhp7a1ib3cs224ph7x7bcg30bighd4f14xqos2q0c0l1bmb3ij6m8qrs6nicnibujemyvsy24z2f7i82bis112j8ud79xv2lp4405rkiw7ib254w9znk714dlbkabylf9hpggxw54i01gkyd3u9josev5iurw45785ub4bm2sh9ieadc69sp == 
\b\y\r\w\0\b\d\w\c\0\0\5\m\f\7\s\m\g\5\9\1\i\1\j\4\g\0\3\8\t\o\p\5\u\5\2\d\k\p\i\4\k\g\6\g\d\b\n\s\9\f\z\a\p\e\2\q\a\p\q\r\c\k\x\v\6\6\f\2\u\j\f\d\m\g\u\0\o\0\d\m\b\b\p\b\o\r\t\e\d\l\p\i\e\y\p\q\h\e\7\z\o\0\r\n\i\f\z\p\9\k\t\7\7\p\n\2\9\a\q\l\s\g\b\f\s\7\e\x\t\h\x\t\9\4\n\b\3\f\3\h\d\c\c\a\l\w\8\b\2\z\5\w\4\1\r\0\6\o\c\r\u\z\4\u\z\8\l\u\5\b\h\q\z\a\y\g\d\v\g\a\b\o\f\0\4\c\n\d\i\q\r\5\t\6\r\f\4\l\l\7\9\8\x\c\u\6\q\d\0\c\a\2\m\f\e\l\x\5\5\d\4\j\l\a\y\j\w\q\j\b\k\o\2\t\o\b\k\y\t\e\p\5\8\s\i\k\h\6\8\m\z\8\p\f\r\r\8\x\m\h\d\h\o\c\d\l\7\c\s\b\e\e\b\b\8\y\3\o\5\y\9\4\b\b\9\o\0\6\r\d\a\b\s\v\i\v\7\y\9\i\t\y\6\0\0\b\l\h\p\n\8\a\g\5\2\w\1\l\a\1\r\m\6\c\0\r\e\1\y\k\u\v\h\p\7\a\1\i\b\3\c\s\2\2\4\p\h\7\x\7\b\c\g\3\0\b\i\g\h\d\4\f\1\4\x\q\o\s\2\q\0\c\0\l\1\b\m\b\3\i\j\6\m\8\q\r\s\6\n\i\c\n\i\b\u\j\e\m\y\v\s\y\2\4\z\2\f\7\i\8\2\b\i\s\1\1\2\j\8\u\d\7\9\x\v\2\l\p\4\4\0\5\r\k\i\w\7\i\b\2\5\4\w\9\z\n\k\7\1\4\d\l\b\k\a\b\y\l\f\9\h\p\g\g\x\w\5\4\i\0\1\g\k\y\d\3\u\9\j\o\s\e\v\5\i\u\r\w\4\5\7\8\5\u\b\4\b\m\2\s\h\9\i\e\a\d\c\6\9\s\p ]] 00:07:08.548 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.548 22:18:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:08.548 [2024-07-15 22:18:21.972990] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:08.548 [2024-07-15 22:18:21.973082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63662 ] 00:07:08.548 [2024-07-15 22:18:22.116246] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.807 [2024-07-15 22:18:22.265536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.808 [2024-07-15 22:18:22.338254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.066  Copying: 512/512 [B] (average 166 kBps) 00:07:09.066 00:07:09.066 22:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ byrw0bdwc005mf7smg591i1j4g038top5u52dkpi4kg6gdbns9fzape2qapqrckxv66f2ujfdmgu0o0dmbbpbortedlpieypqhe7zo0rnifzp9kt77pn29aqlsgbfs7exthxt94nb3f3hdccalw8b2z5w41r06ocruz4uz8lu5bhqzaygdvgabof04cndiqr5t6rf4ll798xcu6qd0ca2mfelx55d4jlayjwqjbko2tobkytep58sikh68mz8pfrr8xmhdhocdl7csbeebb8y3o5y94bb9o06rdabsviv7y9ity600blhpn8ag52w1la1rm6c0re1ykuvhp7a1ib3cs224ph7x7bcg30bighd4f14xqos2q0c0l1bmb3ij6m8qrs6nicnibujemyvsy24z2f7i82bis112j8ud79xv2lp4405rkiw7ib254w9znk714dlbkabylf9hpggxw54i01gkyd3u9josev5iurw45785ub4bm2sh9ieadc69sp == 
\b\y\r\w\0\b\d\w\c\0\0\5\m\f\7\s\m\g\5\9\1\i\1\j\4\g\0\3\8\t\o\p\5\u\5\2\d\k\p\i\4\k\g\6\g\d\b\n\s\9\f\z\a\p\e\2\q\a\p\q\r\c\k\x\v\6\6\f\2\u\j\f\d\m\g\u\0\o\0\d\m\b\b\p\b\o\r\t\e\d\l\p\i\e\y\p\q\h\e\7\z\o\0\r\n\i\f\z\p\9\k\t\7\7\p\n\2\9\a\q\l\s\g\b\f\s\7\e\x\t\h\x\t\9\4\n\b\3\f\3\h\d\c\c\a\l\w\8\b\2\z\5\w\4\1\r\0\6\o\c\r\u\z\4\u\z\8\l\u\5\b\h\q\z\a\y\g\d\v\g\a\b\o\f\0\4\c\n\d\i\q\r\5\t\6\r\f\4\l\l\7\9\8\x\c\u\6\q\d\0\c\a\2\m\f\e\l\x\5\5\d\4\j\l\a\y\j\w\q\j\b\k\o\2\t\o\b\k\y\t\e\p\5\8\s\i\k\h\6\8\m\z\8\p\f\r\r\8\x\m\h\d\h\o\c\d\l\7\c\s\b\e\e\b\b\8\y\3\o\5\y\9\4\b\b\9\o\0\6\r\d\a\b\s\v\i\v\7\y\9\i\t\y\6\0\0\b\l\h\p\n\8\a\g\5\2\w\1\l\a\1\r\m\6\c\0\r\e\1\y\k\u\v\h\p\7\a\1\i\b\3\c\s\2\2\4\p\h\7\x\7\b\c\g\3\0\b\i\g\h\d\4\f\1\4\x\q\o\s\2\q\0\c\0\l\1\b\m\b\3\i\j\6\m\8\q\r\s\6\n\i\c\n\i\b\u\j\e\m\y\v\s\y\2\4\z\2\f\7\i\8\2\b\i\s\1\1\2\j\8\u\d\7\9\x\v\2\l\p\4\4\0\5\r\k\i\w\7\i\b\2\5\4\w\9\z\n\k\7\1\4\d\l\b\k\a\b\y\l\f\9\h\p\g\g\x\w\5\4\i\0\1\g\k\y\d\3\u\9\j\o\s\e\v\5\i\u\r\w\4\5\7\8\5\u\b\4\b\m\2\s\h\9\i\e\a\d\c\6\9\s\p ]] 00:07:09.066 22:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.066 22:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:09.325 [2024-07-15 22:18:22.748702] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:09.325 [2024-07-15 22:18:22.748778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63670 ] 00:07:09.325 [2024-07-15 22:18:22.893798] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.587 [2024-07-15 22:18:23.041647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.587 [2024-07-15 22:18:23.113991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.846  Copying: 512/512 [B] (average 250 kBps) 00:07:09.846 00:07:09.846 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ byrw0bdwc005mf7smg591i1j4g038top5u52dkpi4kg6gdbns9fzape2qapqrckxv66f2ujfdmgu0o0dmbbpbortedlpieypqhe7zo0rnifzp9kt77pn29aqlsgbfs7exthxt94nb3f3hdccalw8b2z5w41r06ocruz4uz8lu5bhqzaygdvgabof04cndiqr5t6rf4ll798xcu6qd0ca2mfelx55d4jlayjwqjbko2tobkytep58sikh68mz8pfrr8xmhdhocdl7csbeebb8y3o5y94bb9o06rdabsviv7y9ity600blhpn8ag52w1la1rm6c0re1ykuvhp7a1ib3cs224ph7x7bcg30bighd4f14xqos2q0c0l1bmb3ij6m8qrs6nicnibujemyvsy24z2f7i82bis112j8ud79xv2lp4405rkiw7ib254w9znk714dlbkabylf9hpggxw54i01gkyd3u9josev5iurw45785ub4bm2sh9ieadc69sp == 
\b\y\r\w\0\b\d\w\c\0\0\5\m\f\7\s\m\g\5\9\1\i\1\j\4\g\0\3\8\t\o\p\5\u\5\2\d\k\p\i\4\k\g\6\g\d\b\n\s\9\f\z\a\p\e\2\q\a\p\q\r\c\k\x\v\6\6\f\2\u\j\f\d\m\g\u\0\o\0\d\m\b\b\p\b\o\r\t\e\d\l\p\i\e\y\p\q\h\e\7\z\o\0\r\n\i\f\z\p\9\k\t\7\7\p\n\2\9\a\q\l\s\g\b\f\s\7\e\x\t\h\x\t\9\4\n\b\3\f\3\h\d\c\c\a\l\w\8\b\2\z\5\w\4\1\r\0\6\o\c\r\u\z\4\u\z\8\l\u\5\b\h\q\z\a\y\g\d\v\g\a\b\o\f\0\4\c\n\d\i\q\r\5\t\6\r\f\4\l\l\7\9\8\x\c\u\6\q\d\0\c\a\2\m\f\e\l\x\5\5\d\4\j\l\a\y\j\w\q\j\b\k\o\2\t\o\b\k\y\t\e\p\5\8\s\i\k\h\6\8\m\z\8\p\f\r\r\8\x\m\h\d\h\o\c\d\l\7\c\s\b\e\e\b\b\8\y\3\o\5\y\9\4\b\b\9\o\0\6\r\d\a\b\s\v\i\v\7\y\9\i\t\y\6\0\0\b\l\h\p\n\8\a\g\5\2\w\1\l\a\1\r\m\6\c\0\r\e\1\y\k\u\v\h\p\7\a\1\i\b\3\c\s\2\2\4\p\h\7\x\7\b\c\g\3\0\b\i\g\h\d\4\f\1\4\x\q\o\s\2\q\0\c\0\l\1\b\m\b\3\i\j\6\m\8\q\r\s\6\n\i\c\n\i\b\u\j\e\m\y\v\s\y\2\4\z\2\f\7\i\8\2\b\i\s\1\1\2\j\8\u\d\7\9\x\v\2\l\p\4\4\0\5\r\k\i\w\7\i\b\2\5\4\w\9\z\n\k\7\1\4\d\l\b\k\a\b\y\l\f\9\h\p\g\g\x\w\5\4\i\0\1\g\k\y\d\3\u\9\j\o\s\e\v\5\i\u\r\w\4\5\7\8\5\u\b\4\b\m\2\s\h\9\i\e\a\d\c\6\9\s\p ]] 00:07:09.846 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:09.846 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:09.846 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.846 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.103 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.103 22:18:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:10.103 [2024-07-15 22:18:23.536516] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:10.104 [2024-07-15 22:18:23.537078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63678 ] 00:07:10.104 [2024-07-15 22:18:23.678703] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.390 [2024-07-15 22:18:23.826708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.390 [2024-07-15 22:18:23.899484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.668  Copying: 512/512 [B] (average 500 kBps) 00:07:10.668 00:07:10.668 22:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ izak872cxmb04df4gzee9hrdruf2akoablvtk6lusqonq2im8p9ewr6iy3qyn3daz30wn4l1qrba9rbupxrzw9i164hfqgdrfbpe98wnwb1crl0o7k170uu0w4sa92hbqbborb631npfks10gc23apwpnpeqn1r0crl5j85fuxxhc6xgzcewyaomzc6w7vo9vmusz3tfj0w2lw5udo4x09igb97fd437a6073256gu4nk4vgs5m0ro6in0pdwzae2o0nshozev8h6auivs0u20xj02jng012sl6dz9tllv34bqho6sx0r9cqtv5o3h6iwpk2pfcmgfn1ait20woij7zjqu603n3yp51t54ftiwcxt8mdreqm38n2zodv8mxvj9zpjso5q3r96qxia2glwzg41hfe8df1xxu4lf5yjce2mnoq1648g3u70plq6wptij9dlqnlbvc6twl590bffw04088sxr6942ngajg61en089tsmz0bqa6i2ft90pi9 == \i\z\a\k\8\7\2\c\x\m\b\0\4\d\f\4\g\z\e\e\9\h\r\d\r\u\f\2\a\k\o\a\b\l\v\t\k\6\l\u\s\q\o\n\q\2\i\m\8\p\9\e\w\r\6\i\y\3\q\y\n\3\d\a\z\3\0\w\n\4\l\1\q\r\b\a\9\r\b\u\p\x\r\z\w\9\i\1\6\4\h\f\q\g\d\r\f\b\p\e\9\8\w\n\w\b\1\c\r\l\0\o\7\k\1\7\0\u\u\0\w\4\s\a\9\2\h\b\q\b\b\o\r\b\6\3\1\n\p\f\k\s\1\0\g\c\2\3\a\p\w\p\n\p\e\q\n\1\r\0\c\r\l\5\j\8\5\f\u\x\x\h\c\6\x\g\z\c\e\w\y\a\o\m\z\c\6\w\7\v\o\9\v\m\u\s\z\3\t\f\j\0\w\2\l\w\5\u\d\o\4\x\0\9\i\g\b\9\7\f\d\4\3\7\a\6\0\7\3\2\5\6\g\u\4\n\k\4\v\g\s\5\m\0\r\o\6\i\n\0\p\d\w\z\a\e\2\o\0\n\s\h\o\z\e\v\8\h\6\a\u\i\v\s\0\u\2\0\x\j\0\2\j\n\g\0\1\2\s\l\6\d\z\9\t\l\l\v\3\4\b\q\h\o\6\s\x\0\r\9\c\q\t\v\5\o\3\h\6\i\w\p\k\2\p\f\c\m\g\f\n\1\a\i\t\2\0\w\o\i\j\7\z\j\q\u\6\0\3\n\3\y\p\5\1\t\5\4\f\t\i\w\c\x\t\8\m\d\r\e\q\m\3\8\n\2\z\o\d\v\8\m\x\v\j\9\z\p\j\s\o\5\q\3\r\9\6\q\x\i\a\2\g\l\w\z\g\4\1\h\f\e\8\d\f\1\x\x\u\4\l\f\5\y\j\c\e\2\m\n\o\q\1\6\4\8\g\3\u\7\0\p\l\q\6\w\p\t\i\j\9\d\l\q\n\l\b\v\c\6\t\w\l\5\9\0\b\f\f\w\0\4\0\8\8\s\x\r\6\9\4\2\n\g\a\j\g\6\1\e\n\0\8\9\t\s\m\z\0\b\q\a\6\i\2\f\t\9\0\p\i\9 ]] 00:07:10.668 22:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.668 22:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:10.926 [2024-07-15 22:18:24.303218] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:10.926 [2024-07-15 22:18:24.303292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63690 ] 00:07:10.926 [2024-07-15 22:18:24.446256] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.185 [2024-07-15 22:18:24.595106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.185 [2024-07-15 22:18:24.667553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.444  Copying: 512/512 [B] (average 500 kBps) 00:07:11.444 00:07:11.444 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ izak872cxmb04df4gzee9hrdruf2akoablvtk6lusqonq2im8p9ewr6iy3qyn3daz30wn4l1qrba9rbupxrzw9i164hfqgdrfbpe98wnwb1crl0o7k170uu0w4sa92hbqbborb631npfks10gc23apwpnpeqn1r0crl5j85fuxxhc6xgzcewyaomzc6w7vo9vmusz3tfj0w2lw5udo4x09igb97fd437a6073256gu4nk4vgs5m0ro6in0pdwzae2o0nshozev8h6auivs0u20xj02jng012sl6dz9tllv34bqho6sx0r9cqtv5o3h6iwpk2pfcmgfn1ait20woij7zjqu603n3yp51t54ftiwcxt8mdreqm38n2zodv8mxvj9zpjso5q3r96qxia2glwzg41hfe8df1xxu4lf5yjce2mnoq1648g3u70plq6wptij9dlqnlbvc6twl590bffw04088sxr6942ngajg61en089tsmz0bqa6i2ft90pi9 == \i\z\a\k\8\7\2\c\x\m\b\0\4\d\f\4\g\z\e\e\9\h\r\d\r\u\f\2\a\k\o\a\b\l\v\t\k\6\l\u\s\q\o\n\q\2\i\m\8\p\9\e\w\r\6\i\y\3\q\y\n\3\d\a\z\3\0\w\n\4\l\1\q\r\b\a\9\r\b\u\p\x\r\z\w\9\i\1\6\4\h\f\q\g\d\r\f\b\p\e\9\8\w\n\w\b\1\c\r\l\0\o\7\k\1\7\0\u\u\0\w\4\s\a\9\2\h\b\q\b\b\o\r\b\6\3\1\n\p\f\k\s\1\0\g\c\2\3\a\p\w\p\n\p\e\q\n\1\r\0\c\r\l\5\j\8\5\f\u\x\x\h\c\6\x\g\z\c\e\w\y\a\o\m\z\c\6\w\7\v\o\9\v\m\u\s\z\3\t\f\j\0\w\2\l\w\5\u\d\o\4\x\0\9\i\g\b\9\7\f\d\4\3\7\a\6\0\7\3\2\5\6\g\u\4\n\k\4\v\g\s\5\m\0\r\o\6\i\n\0\p\d\w\z\a\e\2\o\0\n\s\h\o\z\e\v\8\h\6\a\u\i\v\s\0\u\2\0\x\j\0\2\j\n\g\0\1\2\s\l\6\d\z\9\t\l\l\v\3\4\b\q\h\o\6\s\x\0\r\9\c\q\t\v\5\o\3\h\6\i\w\p\k\2\p\f\c\m\g\f\n\1\a\i\t\2\0\w\o\i\j\7\z\j\q\u\6\0\3\n\3\y\p\5\1\t\5\4\f\t\i\w\c\x\t\8\m\d\r\e\q\m\3\8\n\2\z\o\d\v\8\m\x\v\j\9\z\p\j\s\o\5\q\3\r\9\6\q\x\i\a\2\g\l\w\z\g\4\1\h\f\e\8\d\f\1\x\x\u\4\l\f\5\y\j\c\e\2\m\n\o\q\1\6\4\8\g\3\u\7\0\p\l\q\6\w\p\t\i\j\9\d\l\q\n\l\b\v\c\6\t\w\l\5\9\0\b\f\f\w\0\4\0\8\8\s\x\r\6\9\4\2\n\g\a\j\g\6\1\e\n\0\8\9\t\s\m\z\0\b\q\a\6\i\2\f\t\9\0\p\i\9 ]] 00:07:11.444 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.444 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:11.444 [2024-07-15 22:18:25.072277] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:11.444 [2024-07-15 22:18:25.072350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63698 ] 00:07:11.702 [2024-07-15 22:18:25.215303] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.961 [2024-07-15 22:18:25.360252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.961 [2024-07-15 22:18:25.433245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.219  Copying: 512/512 [B] (average 500 kBps) 00:07:12.219 00:07:12.219 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ izak872cxmb04df4gzee9hrdruf2akoablvtk6lusqonq2im8p9ewr6iy3qyn3daz30wn4l1qrba9rbupxrzw9i164hfqgdrfbpe98wnwb1crl0o7k170uu0w4sa92hbqbborb631npfks10gc23apwpnpeqn1r0crl5j85fuxxhc6xgzcewyaomzc6w7vo9vmusz3tfj0w2lw5udo4x09igb97fd437a6073256gu4nk4vgs5m0ro6in0pdwzae2o0nshozev8h6auivs0u20xj02jng012sl6dz9tllv34bqho6sx0r9cqtv5o3h6iwpk2pfcmgfn1ait20woij7zjqu603n3yp51t54ftiwcxt8mdreqm38n2zodv8mxvj9zpjso5q3r96qxia2glwzg41hfe8df1xxu4lf5yjce2mnoq1648g3u70plq6wptij9dlqnlbvc6twl590bffw04088sxr6942ngajg61en089tsmz0bqa6i2ft90pi9 == \i\z\a\k\8\7\2\c\x\m\b\0\4\d\f\4\g\z\e\e\9\h\r\d\r\u\f\2\a\k\o\a\b\l\v\t\k\6\l\u\s\q\o\n\q\2\i\m\8\p\9\e\w\r\6\i\y\3\q\y\n\3\d\a\z\3\0\w\n\4\l\1\q\r\b\a\9\r\b\u\p\x\r\z\w\9\i\1\6\4\h\f\q\g\d\r\f\b\p\e\9\8\w\n\w\b\1\c\r\l\0\o\7\k\1\7\0\u\u\0\w\4\s\a\9\2\h\b\q\b\b\o\r\b\6\3\1\n\p\f\k\s\1\0\g\c\2\3\a\p\w\p\n\p\e\q\n\1\r\0\c\r\l\5\j\8\5\f\u\x\x\h\c\6\x\g\z\c\e\w\y\a\o\m\z\c\6\w\7\v\o\9\v\m\u\s\z\3\t\f\j\0\w\2\l\w\5\u\d\o\4\x\0\9\i\g\b\9\7\f\d\4\3\7\a\6\0\7\3\2\5\6\g\u\4\n\k\4\v\g\s\5\m\0\r\o\6\i\n\0\p\d\w\z\a\e\2\o\0\n\s\h\o\z\e\v\8\h\6\a\u\i\v\s\0\u\2\0\x\j\0\2\j\n\g\0\1\2\s\l\6\d\z\9\t\l\l\v\3\4\b\q\h\o\6\s\x\0\r\9\c\q\t\v\5\o\3\h\6\i\w\p\k\2\p\f\c\m\g\f\n\1\a\i\t\2\0\w\o\i\j\7\z\j\q\u\6\0\3\n\3\y\p\5\1\t\5\4\f\t\i\w\c\x\t\8\m\d\r\e\q\m\3\8\n\2\z\o\d\v\8\m\x\v\j\9\z\p\j\s\o\5\q\3\r\9\6\q\x\i\a\2\g\l\w\z\g\4\1\h\f\e\8\d\f\1\x\x\u\4\l\f\5\y\j\c\e\2\m\n\o\q\1\6\4\8\g\3\u\7\0\p\l\q\6\w\p\t\i\j\9\d\l\q\n\l\b\v\c\6\t\w\l\5\9\0\b\f\f\w\0\4\0\8\8\s\x\r\6\9\4\2\n\g\a\j\g\6\1\e\n\0\8\9\t\s\m\z\0\b\q\a\6\i\2\f\t\9\0\p\i\9 ]] 00:07:12.219 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.219 22:18:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:12.219 [2024-07-15 22:18:25.837636] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:12.219 [2024-07-15 22:18:25.837729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:07:12.478 [2024-07-15 22:18:25.982915] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.736 [2024-07-15 22:18:26.133391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.736 [2024-07-15 22:18:26.207552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.995  Copying: 512/512 [B] (average 500 kBps) 00:07:12.995 00:07:12.995 22:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ izak872cxmb04df4gzee9hrdruf2akoablvtk6lusqonq2im8p9ewr6iy3qyn3daz30wn4l1qrba9rbupxrzw9i164hfqgdrfbpe98wnwb1crl0o7k170uu0w4sa92hbqbborb631npfks10gc23apwpnpeqn1r0crl5j85fuxxhc6xgzcewyaomzc6w7vo9vmusz3tfj0w2lw5udo4x09igb97fd437a6073256gu4nk4vgs5m0ro6in0pdwzae2o0nshozev8h6auivs0u20xj02jng012sl6dz9tllv34bqho6sx0r9cqtv5o3h6iwpk2pfcmgfn1ait20woij7zjqu603n3yp51t54ftiwcxt8mdreqm38n2zodv8mxvj9zpjso5q3r96qxia2glwzg41hfe8df1xxu4lf5yjce2mnoq1648g3u70plq6wptij9dlqnlbvc6twl590bffw04088sxr6942ngajg61en089tsmz0bqa6i2ft90pi9 == \i\z\a\k\8\7\2\c\x\m\b\0\4\d\f\4\g\z\e\e\9\h\r\d\r\u\f\2\a\k\o\a\b\l\v\t\k\6\l\u\s\q\o\n\q\2\i\m\8\p\9\e\w\r\6\i\y\3\q\y\n\3\d\a\z\3\0\w\n\4\l\1\q\r\b\a\9\r\b\u\p\x\r\z\w\9\i\1\6\4\h\f\q\g\d\r\f\b\p\e\9\8\w\n\w\b\1\c\r\l\0\o\7\k\1\7\0\u\u\0\w\4\s\a\9\2\h\b\q\b\b\o\r\b\6\3\1\n\p\f\k\s\1\0\g\c\2\3\a\p\w\p\n\p\e\q\n\1\r\0\c\r\l\5\j\8\5\f\u\x\x\h\c\6\x\g\z\c\e\w\y\a\o\m\z\c\6\w\7\v\o\9\v\m\u\s\z\3\t\f\j\0\w\2\l\w\5\u\d\o\4\x\0\9\i\g\b\9\7\f\d\4\3\7\a\6\0\7\3\2\5\6\g\u\4\n\k\4\v\g\s\5\m\0\r\o\6\i\n\0\p\d\w\z\a\e\2\o\0\n\s\h\o\z\e\v\8\h\6\a\u\i\v\s\0\u\2\0\x\j\0\2\j\n\g\0\1\2\s\l\6\d\z\9\t\l\l\v\3\4\b\q\h\o\6\s\x\0\r\9\c\q\t\v\5\o\3\h\6\i\w\p\k\2\p\f\c\m\g\f\n\1\a\i\t\2\0\w\o\i\j\7\z\j\q\u\6\0\3\n\3\y\p\5\1\t\5\4\f\t\i\w\c\x\t\8\m\d\r\e\q\m\3\8\n\2\z\o\d\v\8\m\x\v\j\9\z\p\j\s\o\5\q\3\r\9\6\q\x\i\a\2\g\l\w\z\g\4\1\h\f\e\8\d\f\1\x\x\u\4\l\f\5\y\j\c\e\2\m\n\o\q\1\6\4\8\g\3\u\7\0\p\l\q\6\w\p\t\i\j\9\d\l\q\n\l\b\v\c\6\t\w\l\5\9\0\b\f\f\w\0\4\0\8\8\s\x\r\6\9\4\2\n\g\a\j\g\6\1\e\n\0\8\9\t\s\m\z\0\b\q\a\6\i\2\f\t\9\0\p\i\9 ]] 00:07:12.995 00:07:12.995 real 0m6.211s 00:07:12.995 user 0m3.683s 00:07:12.996 sys 0m1.545s 00:07:12.996 22:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.996 ************************************ 00:07:12.996 END TEST dd_flags_misc_forced_aio 00:07:12.996 ************************************ 00:07:12.996 22:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.996 22:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:12.996 22:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:12.996 22:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:13.254 22:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:13.254 ************************************ 00:07:13.254 END TEST spdk_dd_posix 00:07:13.254 ************************************ 00:07:13.254 00:07:13.254 real 0m27.942s 00:07:13.254 user 0m15.268s 
00:07:13.254 sys 0m9.144s 00:07:13.254 22:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.254 22:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 22:18:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:13.254 22:18:26 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:13.254 22:18:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.254 22:18:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.254 22:18:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 ************************************ 00:07:13.254 START TEST spdk_dd_malloc 00:07:13.254 ************************************ 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:13.254 * Looking for test storage... 00:07:13.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.254 22:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:13.255 ************************************ 00:07:13.255 START TEST dd_malloc_copy 00:07:13.255 ************************************ 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.255 22:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [2024-07-15 22:18:26.904350] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:13.513 [2024-07-15 22:18:26.904430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:07:13.513 { 00:07:13.513 "subsystems": [ 00:07:13.513 { 00:07:13.513 "subsystem": "bdev", 00:07:13.513 "config": [ 00:07:13.513 { 00:07:13.513 "params": { 00:07:13.513 "block_size": 512, 00:07:13.513 "num_blocks": 1048576, 00:07:13.513 "name": "malloc0" 00:07:13.513 }, 00:07:13.513 "method": "bdev_malloc_create" 00:07:13.513 }, 00:07:13.513 { 00:07:13.513 "params": { 00:07:13.513 "block_size": 512, 00:07:13.513 "num_blocks": 1048576, 00:07:13.513 "name": "malloc1" 00:07:13.513 }, 00:07:13.513 "method": "bdev_malloc_create" 00:07:13.513 }, 00:07:13.513 { 00:07:13.513 "method": "bdev_wait_for_examine" 00:07:13.513 } 00:07:13.513 ] 00:07:13.513 } 00:07:13.513 ] 00:07:13.513 } 00:07:13.513 [2024-07-15 22:18:27.049935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.771 [2024-07-15 22:18:27.149176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.771 [2024-07-15 22:18:27.191780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.004  Copying: 250/512 [MB] (250 MBps) Copying: 499/512 [MB] (249 MBps) Copying: 512/512 [MB] (average 250 MBps) 00:07:17.004 00:07:17.004 22:18:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:17.004 22:18:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:17.004 22:18:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:17.004 22:18:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 [2024-07-15 22:18:30.409862] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:17.004 [2024-07-15 22:18:30.409942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63827 ] 00:07:17.004 { 00:07:17.004 "subsystems": [ 00:07:17.004 { 00:07:17.004 "subsystem": "bdev", 00:07:17.004 "config": [ 00:07:17.004 { 00:07:17.004 "params": { 00:07:17.004 "block_size": 512, 00:07:17.004 "num_blocks": 1048576, 00:07:17.004 "name": "malloc0" 00:07:17.004 }, 00:07:17.004 "method": "bdev_malloc_create" 00:07:17.004 }, 00:07:17.004 { 00:07:17.004 "params": { 00:07:17.004 "block_size": 512, 00:07:17.004 "num_blocks": 1048576, 00:07:17.004 "name": "malloc1" 00:07:17.004 }, 00:07:17.004 "method": "bdev_malloc_create" 00:07:17.004 }, 00:07:17.004 { 00:07:17.004 "method": "bdev_wait_for_examine" 00:07:17.004 } 00:07:17.004 ] 00:07:17.004 } 00:07:17.004 ] 00:07:17.004 } 00:07:17.004 [2024-07-15 22:18:30.552962] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.261 [2024-07-15 22:18:30.700449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.261 [2024-07-15 22:18:30.774732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.813  Copying: 249/512 [MB] (249 MBps) Copying: 496/512 [MB] (247 MBps) Copying: 512/512 [MB] (average 248 MBps) 00:07:20.813 00:07:20.813 00:07:20.813 real 0m7.285s 00:07:20.813 user 0m6.157s 00:07:20.813 sys 0m0.958s 00:07:20.813 22:18:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.813 ************************************ 00:07:20.813 END TEST dd_malloc_copy 00:07:20.813 ************************************ 00:07:20.813 22:18:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 22:18:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:20.813 ************************************ 00:07:20.813 END TEST spdk_dd_malloc 00:07:20.813 ************************************ 00:07:20.813 00:07:20.813 real 0m7.488s 00:07:20.813 user 0m6.234s 00:07:20.813 sys 0m1.087s 00:07:20.813 22:18:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.813 22:18:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 22:18:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:20.813 22:18:34 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:20.813 22:18:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.813 22:18:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.813 22:18:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 ************************************ 00:07:20.813 START TEST spdk_dd_bdev_to_bdev 00:07:20.813 ************************************ 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:20.813 * Looking for test storage... 
00:07:20.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:20.813 
22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 ************************************ 00:07:20.813 START TEST dd_inflate_file 00:07:20.813 ************************************ 00:07:20.813 22:18:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:21.072 [2024-07-15 22:18:34.473788] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:21.072 [2024-07-15 22:18:34.473865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63937 ] 00:07:21.072 [2024-07-15 22:18:34.618821] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.331 [2024-07-15 22:18:34.768085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.331 [2024-07-15 22:18:34.841663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.928  Copying: 64/64 [MB] (average 1280 MBps) 00:07:21.928 00:07:21.928 00:07:21.928 real 0m0.828s 00:07:21.928 user 0m0.524s 00:07:21.928 sys 0m0.408s 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 ************************************ 00:07:21.928 END TEST dd_inflate_file 00:07:21.928 ************************************ 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 ************************************ 00:07:21.928 START TEST dd_copy_to_out_bdev 00:07:21.928 ************************************ 00:07:21.928 22:18:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:21.928 { 00:07:21.928 "subsystems": [ 00:07:21.928 { 00:07:21.928 "subsystem": "bdev", 00:07:21.928 "config": [ 00:07:21.928 { 00:07:21.928 "params": { 00:07:21.928 "trtype": "pcie", 00:07:21.928 "traddr": "0000:00:10.0", 00:07:21.928 "name": "Nvme0" 00:07:21.928 }, 00:07:21.928 "method": "bdev_nvme_attach_controller" 00:07:21.928 }, 00:07:21.928 { 00:07:21.928 "params": { 00:07:21.928 "trtype": "pcie", 00:07:21.928 "traddr": "0000:00:11.0", 00:07:21.928 "name": "Nvme1" 00:07:21.928 }, 00:07:21.928 "method": "bdev_nvme_attach_controller" 00:07:21.928 }, 00:07:21.928 { 00:07:21.928 "method": "bdev_wait_for_examine" 00:07:21.928 } 00:07:21.928 ] 00:07:21.928 } 00:07:21.928 ] 00:07:21.928 } 00:07:21.928 [2024-07-15 22:18:35.389133] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:21.928 [2024-07-15 22:18:35.389212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63978 ] 00:07:21.928 [2024-07-15 22:18:35.532935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.186 [2024-07-15 22:18:35.685855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.186 [2024-07-15 22:18:35.760935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.129  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 55 MBps) 00:07:24.129 00:07:24.129 00:07:24.129 real 0m2.121s 00:07:24.129 user 0m1.820s 00:07:24.129 sys 0m1.626s 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.129 ************************************ 00:07:24.129 END TEST dd_copy_to_out_bdev 00:07:24.129 ************************************ 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.129 ************************************ 00:07:24.129 START TEST dd_offset_magic 00:07:24.129 ************************************ 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:24.129 22:18:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:24.129 [2024-07-15 22:18:37.589287] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:24.129 [2024-07-15 22:18:37.589383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64021 ] 00:07:24.129 { 00:07:24.129 "subsystems": [ 00:07:24.129 { 00:07:24.129 "subsystem": "bdev", 00:07:24.129 "config": [ 00:07:24.129 { 00:07:24.129 "params": { 00:07:24.129 "trtype": "pcie", 00:07:24.129 "traddr": "0000:00:10.0", 00:07:24.129 "name": "Nvme0" 00:07:24.129 }, 00:07:24.129 "method": "bdev_nvme_attach_controller" 00:07:24.129 }, 00:07:24.129 { 00:07:24.129 "params": { 00:07:24.129 "trtype": "pcie", 00:07:24.129 "traddr": "0000:00:11.0", 00:07:24.129 "name": "Nvme1" 00:07:24.129 }, 00:07:24.129 "method": "bdev_nvme_attach_controller" 00:07:24.129 }, 00:07:24.129 { 00:07:24.129 "method": "bdev_wait_for_examine" 00:07:24.129 } 00:07:24.129 ] 00:07:24.129 } 00:07:24.129 ] 00:07:24.129 } 00:07:24.129 [2024-07-15 22:18:37.731623] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.387 [2024-07-15 22:18:37.884686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.387 [2024-07-15 22:18:37.961823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.213  Copying: 65/65 [MB] (average 738 MBps) 00:07:25.213 00:07:25.213 22:18:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:25.213 22:18:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:25.213 22:18:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:25.213 22:18:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.213 [2024-07-15 22:18:38.648365] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:25.213 [2024-07-15 22:18:38.648466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64041 ] 00:07:25.213 { 00:07:25.213 "subsystems": [ 00:07:25.213 { 00:07:25.213 "subsystem": "bdev", 00:07:25.213 "config": [ 00:07:25.213 { 00:07:25.213 "params": { 00:07:25.213 "trtype": "pcie", 00:07:25.213 "traddr": "0000:00:10.0", 00:07:25.213 "name": "Nvme0" 00:07:25.213 }, 00:07:25.213 "method": "bdev_nvme_attach_controller" 00:07:25.213 }, 00:07:25.213 { 00:07:25.213 "params": { 00:07:25.213 "trtype": "pcie", 00:07:25.213 "traddr": "0000:00:11.0", 00:07:25.213 "name": "Nvme1" 00:07:25.213 }, 00:07:25.213 "method": "bdev_nvme_attach_controller" 00:07:25.213 }, 00:07:25.213 { 00:07:25.213 "method": "bdev_wait_for_examine" 00:07:25.213 } 00:07:25.213 ] 00:07:25.213 } 00:07:25.213 ] 00:07:25.213 } 00:07:25.213 [2024-07-15 22:18:38.796831] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.472 [2024-07-15 22:18:38.951864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.472 [2024-07-15 22:18:39.029720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.988  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:25.988 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:25.988 22:18:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.988 [2024-07-15 22:18:39.587506] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:25.988 [2024-07-15 22:18:39.587837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64063 ] 00:07:25.988 { 00:07:25.988 "subsystems": [ 00:07:25.988 { 00:07:25.988 "subsystem": "bdev", 00:07:25.988 "config": [ 00:07:25.988 { 00:07:25.988 "params": { 00:07:25.988 "trtype": "pcie", 00:07:25.988 "traddr": "0000:00:10.0", 00:07:25.988 "name": "Nvme0" 00:07:25.988 }, 00:07:25.988 "method": "bdev_nvme_attach_controller" 00:07:25.988 }, 00:07:25.988 { 00:07:25.988 "params": { 00:07:25.988 "trtype": "pcie", 00:07:25.988 "traddr": "0000:00:11.0", 00:07:25.988 "name": "Nvme1" 00:07:25.988 }, 00:07:25.988 "method": "bdev_nvme_attach_controller" 00:07:25.988 }, 00:07:25.988 { 00:07:25.988 "method": "bdev_wait_for_examine" 00:07:25.988 } 00:07:25.988 ] 00:07:25.988 } 00:07:25.988 ] 00:07:25.988 } 00:07:26.246 [2024-07-15 22:18:39.731100] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.504 [2024-07-15 22:18:39.883920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.504 [2024-07-15 22:18:39.960561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.020  Copying: 65/65 [MB] (average 773 MBps) 00:07:27.020 00:07:27.020 22:18:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:27.020 22:18:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:27.020 22:18:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:27.020 22:18:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 [2024-07-15 22:18:40.646770] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:27.020 [2024-07-15 22:18:40.646889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64083 ] 00:07:27.286 { 00:07:27.286 "subsystems": [ 00:07:27.286 { 00:07:27.286 "subsystem": "bdev", 00:07:27.286 "config": [ 00:07:27.286 { 00:07:27.286 "params": { 00:07:27.286 "trtype": "pcie", 00:07:27.286 "traddr": "0000:00:10.0", 00:07:27.286 "name": "Nvme0" 00:07:27.286 }, 00:07:27.286 "method": "bdev_nvme_attach_controller" 00:07:27.286 }, 00:07:27.286 { 00:07:27.286 "params": { 00:07:27.286 "trtype": "pcie", 00:07:27.286 "traddr": "0000:00:11.0", 00:07:27.286 "name": "Nvme1" 00:07:27.286 }, 00:07:27.286 "method": "bdev_nvme_attach_controller" 00:07:27.286 }, 00:07:27.286 { 00:07:27.286 "method": "bdev_wait_for_examine" 00:07:27.286 } 00:07:27.286 ] 00:07:27.286 } 00:07:27.286 ] 00:07:27.286 } 00:07:27.286 [2024-07-15 22:18:40.797877] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.547 [2024-07-15 22:18:40.952012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.547 [2024-07-15 22:18:41.028634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.077  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.077 00:07:28.077 ************************************ 00:07:28.077 END TEST dd_offset_magic 00:07:28.077 ************************************ 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:28.077 00:07:28.077 real 0m4.006s 00:07:28.077 user 0m2.914s 00:07:28.077 sys 0m1.261s 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:28.077 22:18:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:28.077 { 00:07:28.077 "subsystems": [ 00:07:28.077 { 00:07:28.077 "subsystem": "bdev", 00:07:28.077 "config": [ 00:07:28.077 { 
00:07:28.077 "params": { 00:07:28.077 "trtype": "pcie", 00:07:28.077 "traddr": "0000:00:10.0", 00:07:28.077 "name": "Nvme0" 00:07:28.077 }, 00:07:28.077 "method": "bdev_nvme_attach_controller" 00:07:28.077 }, 00:07:28.077 { 00:07:28.077 "params": { 00:07:28.077 "trtype": "pcie", 00:07:28.077 "traddr": "0000:00:11.0", 00:07:28.077 "name": "Nvme1" 00:07:28.077 }, 00:07:28.077 "method": "bdev_nvme_attach_controller" 00:07:28.077 }, 00:07:28.077 { 00:07:28.077 "method": "bdev_wait_for_examine" 00:07:28.077 } 00:07:28.077 ] 00:07:28.077 } 00:07:28.077 ] 00:07:28.077 } 00:07:28.077 [2024-07-15 22:18:41.664505] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:28.077 [2024-07-15 22:18:41.664638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64121 ] 00:07:28.352 [2024-07-15 22:18:41.815425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.352 [2024-07-15 22:18:41.969594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.610 [2024-07-15 22:18:42.045560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.177  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:29.177 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:29.177 22:18:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.177 [2024-07-15 22:18:42.607248] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:29.177 [2024-07-15 22:18:42.607565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64142 ] 00:07:29.177 { 00:07:29.177 "subsystems": [ 00:07:29.177 { 00:07:29.177 "subsystem": "bdev", 00:07:29.177 "config": [ 00:07:29.177 { 00:07:29.177 "params": { 00:07:29.177 "trtype": "pcie", 00:07:29.177 "traddr": "0000:00:10.0", 00:07:29.177 "name": "Nvme0" 00:07:29.177 }, 00:07:29.177 "method": "bdev_nvme_attach_controller" 00:07:29.177 }, 00:07:29.177 { 00:07:29.177 "params": { 00:07:29.177 "trtype": "pcie", 00:07:29.177 "traddr": "0000:00:11.0", 00:07:29.177 "name": "Nvme1" 00:07:29.177 }, 00:07:29.177 "method": "bdev_nvme_attach_controller" 00:07:29.177 }, 00:07:29.177 { 00:07:29.177 "method": "bdev_wait_for_examine" 00:07:29.177 } 00:07:29.177 ] 00:07:29.177 } 00:07:29.177 ] 00:07:29.177 } 00:07:29.177 [2024-07-15 22:18:42.750161] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.435 [2024-07-15 22:18:42.905859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.435 [2024-07-15 22:18:42.981716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.954  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:29.954 00:07:29.954 22:18:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:29.954 ************************************ 00:07:29.954 END TEST spdk_dd_bdev_to_bdev 00:07:29.954 ************************************ 00:07:29.954 00:07:29.954 real 0m9.255s 00:07:29.954 user 0m6.727s 00:07:29.954 sys 0m4.302s 00:07:29.954 22:18:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.954 22:18:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.954 22:18:43 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:29.954 22:18:43 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:29.954 22:18:43 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:29.954 22:18:43 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.954 22:18:43 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.954 22:18:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.212 ************************************ 00:07:30.212 START TEST spdk_dd_uring 00:07:30.212 ************************************ 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:30.212 * Looking for test storage... 
00:07:30.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:30.212 ************************************ 00:07:30.212 START TEST dd_uring_copy 00:07:30.212 ************************************ 00:07:30.212 
22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=n49aed85sd754hptmu1bf6qqa0ouo4ut9eb7vls7yacynezerclza2tdsrgxqzh9k4delvd26ncpn1fwkb0s1n0gkokx6wc7wgk8siibg72wafi8v29i1r6dnftj0j1022j4rae69qrwf5gelzhyu3j22jz7kb3b2u4hqi3a25unkh2yzgryjozx64jjlilq2dyhfiwdc40rt7dq3ke8n1lll7ydi5y6hvv4xa6zu4d4to5yvbs8j2jl7mshntz73n388bqnxmqu0upo1lws66vr0tpgdmmj89ix14gpnw78yxe2do2w9w93nhxrag7suc04weieck78y3xiaphsjy1z970oupbmjrkptyd8rjzv558bn7vmusaokxrymam7yoqh8yfq5gky3uez34lu3jtu0eyb401wi5605rlztmfty4yuvrz92ylksciqxbix7ld0bcnagw7sxf20wa202d1mbykcy3mut9r280dbbq4fn41aptye34zg8zw423gvbxskhgge3bndnrs40y7a6tvyb9uvea7aasxgstmhw1arbv00o2akyxxz03d9wpaeuox95m9ykmvvxtl8yxazcgypzv2g6vba4cqlxxq6mty880hrwgbbklbbxwihcj0qm8to5fck4rn6aa7ihzr6p97xs66igdwrppm8yt8med1qbbfeo3q371nzn8n52xojcacvrnbvjjo9wzqi5y544xmstzw3zsu5qfjs9pn08fe8pb704sfgmpcoo9thu3qv0id4htcqq5cugjas0snzjuh3crv3ws7lbpenqcb9h57l51luab1o83raynj1x0ghtxbxjw3j1jbdhxz3o35844v936178mx5uuupyhxbq1hix85icg5kn03nw24a9pj9f1pq7bh7f7pt6ratycjhet4pww83y0z4ovc2c3orxwaavl34dkuvw2t841ocdx179dz7km16865yxgpmv3hkngzpahy1dmg9c9xsjh38oewxu1olai13k5z3xnn62h6c 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo n49aed85sd754hptmu1bf6qqa0ouo4ut9eb7vls7yacynezerclza2tdsrgxqzh9k4delvd26ncpn1fwkb0s1n0gkokx6wc7wgk8siibg72wafi8v29i1r6dnftj0j1022j4rae69qrwf5gelzhyu3j22jz7kb3b2u4hqi3a25unkh2yzgryjozx64jjlilq2dyhfiwdc40rt7dq3ke8n1lll7ydi5y6hvv4xa6zu4d4to5yvbs8j2jl7mshntz73n388bqnxmqu0upo1lws66vr0tpgdmmj89ix14gpnw78yxe2do2w9w93nhxrag7suc04weieck78y3xiaphsjy1z970oupbmjrkptyd8rjzv558bn7vmusaokxrymam7yoqh8yfq5gky3uez34lu3jtu0eyb401wi5605rlztmfty4yuvrz92ylksciqxbix7ld0bcnagw7sxf20wa202d1mbykcy3mut9r280dbbq4fn41aptye34zg8zw423gvbxskhgge3bndnrs40y7a6tvyb9uvea7aasxgstmhw1arbv00o2akyxxz03d9wpaeuox95m9ykmvvxtl8yxazcgypzv2g6vba4cqlxxq6mty880hrwgbbklbbxwihcj0qm8to5fck4rn6aa7ihzr6p97xs66igdwrppm8yt8med1qbbfeo3q371nzn8n52xojcacvrnbvjjo9wzqi5y544xmstzw3zsu5qfjs9pn08fe8pb704sfgmpcoo9thu3qv0id4htcqq5cugjas0snzjuh3crv3ws7lbpenqcb9h57l51luab1o83raynj1x0ghtxbxjw3j1jbdhxz3o35844v936178mx5uuupyhxbq1hix85icg5kn03nw24a9pj9f1pq7bh7f7pt6ratycjhet4pww83y0z4ovc2c3orxwaavl34dkuvw2t841ocdx179dz7km16865yxgpmv3hkngzpahy1dmg9c9xsjh38oewxu1olai13k5z3xnn62h6c 00:07:30.212 22:18:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:30.212 [2024-07-15 22:18:43.808535] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:30.212 [2024-07-15 22:18:43.809008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64212 ] 00:07:30.470 [2024-07-15 22:18:43.954160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.728 [2024-07-15 22:18:44.111533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.728 [2024-07-15 22:18:44.188149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.898  Copying: 511/511 [MB] (average 1514 MBps) 00:07:31.898 00:07:31.898 22:18:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:31.898 22:18:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:31.898 22:18:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:31.898 22:18:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:31.898 [2024-07-15 22:18:45.483273] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:31.898 [2024-07-15 22:18:45.483865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:07:31.898 { 00:07:31.898 "subsystems": [ 00:07:31.898 { 00:07:31.898 "subsystem": "bdev", 00:07:31.898 "config": [ 00:07:31.898 { 00:07:31.898 "params": { 00:07:31.898 "block_size": 512, 00:07:31.898 "num_blocks": 1048576, 00:07:31.898 "name": "malloc0" 00:07:31.898 }, 00:07:31.898 "method": "bdev_malloc_create" 00:07:31.898 }, 00:07:31.898 { 00:07:31.898 "params": { 00:07:31.898 "filename": "/dev/zram1", 00:07:31.898 "name": "uring0" 00:07:31.898 }, 00:07:31.898 "method": "bdev_uring_create" 00:07:31.898 }, 00:07:31.898 { 00:07:31.898 "method": "bdev_wait_for_examine" 00:07:31.898 } 00:07:31.898 ] 00:07:31.898 } 00:07:31.898 ] 00:07:31.898 } 00:07:32.157 [2024-07-15 22:18:45.637652] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.157 [2024-07-15 22:18:45.787507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.416 [2024-07-15 22:18:45.861310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.296  Copying: 249/512 [MB] (249 MBps) Copying: 495/512 [MB] (245 MBps) Copying: 512/512 [MB] (average 247 MBps) 00:07:35.296 00:07:35.296 22:18:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:35.296 22:18:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:35.296 22:18:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.296 22:18:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.296 [2024-07-15 22:18:48.819532] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:35.296 [2024-07-15 22:18:48.819860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64284 ] 00:07:35.296 { 00:07:35.296 "subsystems": [ 00:07:35.296 { 00:07:35.296 "subsystem": "bdev", 00:07:35.296 "config": [ 00:07:35.296 { 00:07:35.296 "params": { 00:07:35.296 "block_size": 512, 00:07:35.296 "num_blocks": 1048576, 00:07:35.296 "name": "malloc0" 00:07:35.296 }, 00:07:35.296 "method": "bdev_malloc_create" 00:07:35.296 }, 00:07:35.296 { 00:07:35.296 "params": { 00:07:35.296 "filename": "/dev/zram1", 00:07:35.296 "name": "uring0" 00:07:35.296 }, 00:07:35.296 "method": "bdev_uring_create" 00:07:35.296 }, 00:07:35.296 { 00:07:35.296 "method": "bdev_wait_for_examine" 00:07:35.296 } 00:07:35.296 ] 00:07:35.296 } 00:07:35.296 ] 00:07:35.296 } 00:07:35.555 [2024-07-15 22:18:48.963562] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.555 [2024-07-15 22:18:49.113132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.555 [2024-07-15 22:18:49.187384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.801  Copying: 172/512 [MB] (172 MBps) Copying: 343/512 [MB] (171 MBps) Copying: 497/512 [MB] (154 MBps) Copying: 512/512 [MB] (average 166 MBps) 00:07:39.801 00:07:39.801 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:39.801 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ n49aed85sd754hptmu1bf6qqa0ouo4ut9eb7vls7yacynezerclza2tdsrgxqzh9k4delvd26ncpn1fwkb0s1n0gkokx6wc7wgk8siibg72wafi8v29i1r6dnftj0j1022j4rae69qrwf5gelzhyu3j22jz7kb3b2u4hqi3a25unkh2yzgryjozx64jjlilq2dyhfiwdc40rt7dq3ke8n1lll7ydi5y6hvv4xa6zu4d4to5yvbs8j2jl7mshntz73n388bqnxmqu0upo1lws66vr0tpgdmmj89ix14gpnw78yxe2do2w9w93nhxrag7suc04weieck78y3xiaphsjy1z970oupbmjrkptyd8rjzv558bn7vmusaokxrymam7yoqh8yfq5gky3uez34lu3jtu0eyb401wi5605rlztmfty4yuvrz92ylksciqxbix7ld0bcnagw7sxf20wa202d1mbykcy3mut9r280dbbq4fn41aptye34zg8zw423gvbxskhgge3bndnrs40y7a6tvyb9uvea7aasxgstmhw1arbv00o2akyxxz03d9wpaeuox95m9ykmvvxtl8yxazcgypzv2g6vba4cqlxxq6mty880hrwgbbklbbxwihcj0qm8to5fck4rn6aa7ihzr6p97xs66igdwrppm8yt8med1qbbfeo3q371nzn8n52xojcacvrnbvjjo9wzqi5y544xmstzw3zsu5qfjs9pn08fe8pb704sfgmpcoo9thu3qv0id4htcqq5cugjas0snzjuh3crv3ws7lbpenqcb9h57l51luab1o83raynj1x0ghtxbxjw3j1jbdhxz3o35844v936178mx5uuupyhxbq1hix85icg5kn03nw24a9pj9f1pq7bh7f7pt6ratycjhet4pww83y0z4ovc2c3orxwaavl34dkuvw2t841ocdx179dz7km16865yxgpmv3hkngzpahy1dmg9c9xsjh38oewxu1olai13k5z3xnn62h6c == 
\n\4\9\a\e\d\8\5\s\d\7\5\4\h\p\t\m\u\1\b\f\6\q\q\a\0\o\u\o\4\u\t\9\e\b\7\v\l\s\7\y\a\c\y\n\e\z\e\r\c\l\z\a\2\t\d\s\r\g\x\q\z\h\9\k\4\d\e\l\v\d\2\6\n\c\p\n\1\f\w\k\b\0\s\1\n\0\g\k\o\k\x\6\w\c\7\w\g\k\8\s\i\i\b\g\7\2\w\a\f\i\8\v\2\9\i\1\r\6\d\n\f\t\j\0\j\1\0\2\2\j\4\r\a\e\6\9\q\r\w\f\5\g\e\l\z\h\y\u\3\j\2\2\j\z\7\k\b\3\b\2\u\4\h\q\i\3\a\2\5\u\n\k\h\2\y\z\g\r\y\j\o\z\x\6\4\j\j\l\i\l\q\2\d\y\h\f\i\w\d\c\4\0\r\t\7\d\q\3\k\e\8\n\1\l\l\l\7\y\d\i\5\y\6\h\v\v\4\x\a\6\z\u\4\d\4\t\o\5\y\v\b\s\8\j\2\j\l\7\m\s\h\n\t\z\7\3\n\3\8\8\b\q\n\x\m\q\u\0\u\p\o\1\l\w\s\6\6\v\r\0\t\p\g\d\m\m\j\8\9\i\x\1\4\g\p\n\w\7\8\y\x\e\2\d\o\2\w\9\w\9\3\n\h\x\r\a\g\7\s\u\c\0\4\w\e\i\e\c\k\7\8\y\3\x\i\a\p\h\s\j\y\1\z\9\7\0\o\u\p\b\m\j\r\k\p\t\y\d\8\r\j\z\v\5\5\8\b\n\7\v\m\u\s\a\o\k\x\r\y\m\a\m\7\y\o\q\h\8\y\f\q\5\g\k\y\3\u\e\z\3\4\l\u\3\j\t\u\0\e\y\b\4\0\1\w\i\5\6\0\5\r\l\z\t\m\f\t\y\4\y\u\v\r\z\9\2\y\l\k\s\c\i\q\x\b\i\x\7\l\d\0\b\c\n\a\g\w\7\s\x\f\2\0\w\a\2\0\2\d\1\m\b\y\k\c\y\3\m\u\t\9\r\2\8\0\d\b\b\q\4\f\n\4\1\a\p\t\y\e\3\4\z\g\8\z\w\4\2\3\g\v\b\x\s\k\h\g\g\e\3\b\n\d\n\r\s\4\0\y\7\a\6\t\v\y\b\9\u\v\e\a\7\a\a\s\x\g\s\t\m\h\w\1\a\r\b\v\0\0\o\2\a\k\y\x\x\z\0\3\d\9\w\p\a\e\u\o\x\9\5\m\9\y\k\m\v\v\x\t\l\8\y\x\a\z\c\g\y\p\z\v\2\g\6\v\b\a\4\c\q\l\x\x\q\6\m\t\y\8\8\0\h\r\w\g\b\b\k\l\b\b\x\w\i\h\c\j\0\q\m\8\t\o\5\f\c\k\4\r\n\6\a\a\7\i\h\z\r\6\p\9\7\x\s\6\6\i\g\d\w\r\p\p\m\8\y\t\8\m\e\d\1\q\b\b\f\e\o\3\q\3\7\1\n\z\n\8\n\5\2\x\o\j\c\a\c\v\r\n\b\v\j\j\o\9\w\z\q\i\5\y\5\4\4\x\m\s\t\z\w\3\z\s\u\5\q\f\j\s\9\p\n\0\8\f\e\8\p\b\7\0\4\s\f\g\m\p\c\o\o\9\t\h\u\3\q\v\0\i\d\4\h\t\c\q\q\5\c\u\g\j\a\s\0\s\n\z\j\u\h\3\c\r\v\3\w\s\7\l\b\p\e\n\q\c\b\9\h\5\7\l\5\1\l\u\a\b\1\o\8\3\r\a\y\n\j\1\x\0\g\h\t\x\b\x\j\w\3\j\1\j\b\d\h\x\z\3\o\3\5\8\4\4\v\9\3\6\1\7\8\m\x\5\u\u\u\p\y\h\x\b\q\1\h\i\x\8\5\i\c\g\5\k\n\0\3\n\w\2\4\a\9\p\j\9\f\1\p\q\7\b\h\7\f\7\p\t\6\r\a\t\y\c\j\h\e\t\4\p\w\w\8\3\y\0\z\4\o\v\c\2\c\3\o\r\x\w\a\a\v\l\3\4\d\k\u\v\w\2\t\8\4\1\o\c\d\x\1\7\9\d\z\7\k\m\1\6\8\6\5\y\x\g\p\m\v\3\h\k\n\g\z\p\a\h\y\1\d\m\g\9\c\9\x\s\j\h\3\8\o\e\w\x\u\1\o\l\a\i\1\3\k\5\z\3\x\n\n\6\2\h\6\c ]] 00:07:39.801 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:39.801 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ n49aed85sd754hptmu1bf6qqa0ouo4ut9eb7vls7yacynezerclza2tdsrgxqzh9k4delvd26ncpn1fwkb0s1n0gkokx6wc7wgk8siibg72wafi8v29i1r6dnftj0j1022j4rae69qrwf5gelzhyu3j22jz7kb3b2u4hqi3a25unkh2yzgryjozx64jjlilq2dyhfiwdc40rt7dq3ke8n1lll7ydi5y6hvv4xa6zu4d4to5yvbs8j2jl7mshntz73n388bqnxmqu0upo1lws66vr0tpgdmmj89ix14gpnw78yxe2do2w9w93nhxrag7suc04weieck78y3xiaphsjy1z970oupbmjrkptyd8rjzv558bn7vmusaokxrymam7yoqh8yfq5gky3uez34lu3jtu0eyb401wi5605rlztmfty4yuvrz92ylksciqxbix7ld0bcnagw7sxf20wa202d1mbykcy3mut9r280dbbq4fn41aptye34zg8zw423gvbxskhgge3bndnrs40y7a6tvyb9uvea7aasxgstmhw1arbv00o2akyxxz03d9wpaeuox95m9ykmvvxtl8yxazcgypzv2g6vba4cqlxxq6mty880hrwgbbklbbxwihcj0qm8to5fck4rn6aa7ihzr6p97xs66igdwrppm8yt8med1qbbfeo3q371nzn8n52xojcacvrnbvjjo9wzqi5y544xmstzw3zsu5qfjs9pn08fe8pb704sfgmpcoo9thu3qv0id4htcqq5cugjas0snzjuh3crv3ws7lbpenqcb9h57l51luab1o83raynj1x0ghtxbxjw3j1jbdhxz3o35844v936178mx5uuupyhxbq1hix85icg5kn03nw24a9pj9f1pq7bh7f7pt6ratycjhet4pww83y0z4ovc2c3orxwaavl34dkuvw2t841ocdx179dz7km16865yxgpmv3hkngzpahy1dmg9c9xsjh38oewxu1olai13k5z3xnn62h6c == 
\n\4\9\a\e\d\8\5\s\d\7\5\4\h\p\t\m\u\1\b\f\6\q\q\a\0\o\u\o\4\u\t\9\e\b\7\v\l\s\7\y\a\c\y\n\e\z\e\r\c\l\z\a\2\t\d\s\r\g\x\q\z\h\9\k\4\d\e\l\v\d\2\6\n\c\p\n\1\f\w\k\b\0\s\1\n\0\g\k\o\k\x\6\w\c\7\w\g\k\8\s\i\i\b\g\7\2\w\a\f\i\8\v\2\9\i\1\r\6\d\n\f\t\j\0\j\1\0\2\2\j\4\r\a\e\6\9\q\r\w\f\5\g\e\l\z\h\y\u\3\j\2\2\j\z\7\k\b\3\b\2\u\4\h\q\i\3\a\2\5\u\n\k\h\2\y\z\g\r\y\j\o\z\x\6\4\j\j\l\i\l\q\2\d\y\h\f\i\w\d\c\4\0\r\t\7\d\q\3\k\e\8\n\1\l\l\l\7\y\d\i\5\y\6\h\v\v\4\x\a\6\z\u\4\d\4\t\o\5\y\v\b\s\8\j\2\j\l\7\m\s\h\n\t\z\7\3\n\3\8\8\b\q\n\x\m\q\u\0\u\p\o\1\l\w\s\6\6\v\r\0\t\p\g\d\m\m\j\8\9\i\x\1\4\g\p\n\w\7\8\y\x\e\2\d\o\2\w\9\w\9\3\n\h\x\r\a\g\7\s\u\c\0\4\w\e\i\e\c\k\7\8\y\3\x\i\a\p\h\s\j\y\1\z\9\7\0\o\u\p\b\m\j\r\k\p\t\y\d\8\r\j\z\v\5\5\8\b\n\7\v\m\u\s\a\o\k\x\r\y\m\a\m\7\y\o\q\h\8\y\f\q\5\g\k\y\3\u\e\z\3\4\l\u\3\j\t\u\0\e\y\b\4\0\1\w\i\5\6\0\5\r\l\z\t\m\f\t\y\4\y\u\v\r\z\9\2\y\l\k\s\c\i\q\x\b\i\x\7\l\d\0\b\c\n\a\g\w\7\s\x\f\2\0\w\a\2\0\2\d\1\m\b\y\k\c\y\3\m\u\t\9\r\2\8\0\d\b\b\q\4\f\n\4\1\a\p\t\y\e\3\4\z\g\8\z\w\4\2\3\g\v\b\x\s\k\h\g\g\e\3\b\n\d\n\r\s\4\0\y\7\a\6\t\v\y\b\9\u\v\e\a\7\a\a\s\x\g\s\t\m\h\w\1\a\r\b\v\0\0\o\2\a\k\y\x\x\z\0\3\d\9\w\p\a\e\u\o\x\9\5\m\9\y\k\m\v\v\x\t\l\8\y\x\a\z\c\g\y\p\z\v\2\g\6\v\b\a\4\c\q\l\x\x\q\6\m\t\y\8\8\0\h\r\w\g\b\b\k\l\b\b\x\w\i\h\c\j\0\q\m\8\t\o\5\f\c\k\4\r\n\6\a\a\7\i\h\z\r\6\p\9\7\x\s\6\6\i\g\d\w\r\p\p\m\8\y\t\8\m\e\d\1\q\b\b\f\e\o\3\q\3\7\1\n\z\n\8\n\5\2\x\o\j\c\a\c\v\r\n\b\v\j\j\o\9\w\z\q\i\5\y\5\4\4\x\m\s\t\z\w\3\z\s\u\5\q\f\j\s\9\p\n\0\8\f\e\8\p\b\7\0\4\s\f\g\m\p\c\o\o\9\t\h\u\3\q\v\0\i\d\4\h\t\c\q\q\5\c\u\g\j\a\s\0\s\n\z\j\u\h\3\c\r\v\3\w\s\7\l\b\p\e\n\q\c\b\9\h\5\7\l\5\1\l\u\a\b\1\o\8\3\r\a\y\n\j\1\x\0\g\h\t\x\b\x\j\w\3\j\1\j\b\d\h\x\z\3\o\3\5\8\4\4\v\9\3\6\1\7\8\m\x\5\u\u\u\p\y\h\x\b\q\1\h\i\x\8\5\i\c\g\5\k\n\0\3\n\w\2\4\a\9\p\j\9\f\1\p\q\7\b\h\7\f\7\p\t\6\r\a\t\y\c\j\h\e\t\4\p\w\w\8\3\y\0\z\4\o\v\c\2\c\3\o\r\x\w\a\a\v\l\3\4\d\k\u\v\w\2\t\8\4\1\o\c\d\x\1\7\9\d\z\7\k\m\1\6\8\6\5\y\x\g\p\m\v\3\h\k\n\g\z\p\a\h\y\1\d\m\g\9\c\9\x\s\j\h\3\8\o\e\w\x\u\1\o\l\a\i\1\3\k\5\z\3\x\n\n\6\2\h\6\c ]] 00:07:39.801 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:40.057 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:40.057 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:40.057 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:40.057 22:18:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.057 [2024-07-15 22:18:53.609689] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:40.057 [2024-07-15 22:18:53.609770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64371 ] 00:07:40.057 { 00:07:40.057 "subsystems": [ 00:07:40.057 { 00:07:40.057 "subsystem": "bdev", 00:07:40.057 "config": [ 00:07:40.057 { 00:07:40.057 "params": { 00:07:40.057 "block_size": 512, 00:07:40.057 "num_blocks": 1048576, 00:07:40.057 "name": "malloc0" 00:07:40.057 }, 00:07:40.057 "method": "bdev_malloc_create" 00:07:40.057 }, 00:07:40.057 { 00:07:40.057 "params": { 00:07:40.057 "filename": "/dev/zram1", 00:07:40.057 "name": "uring0" 00:07:40.057 }, 00:07:40.057 "method": "bdev_uring_create" 00:07:40.057 }, 00:07:40.057 { 00:07:40.057 "method": "bdev_wait_for_examine" 00:07:40.057 } 00:07:40.057 ] 00:07:40.057 } 00:07:40.057 ] 00:07:40.057 } 00:07:40.315 [2024-07-15 22:18:53.753460] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.315 [2024-07-15 22:18:53.908538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.573 [2024-07-15 22:18:53.983183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.015  Copying: 191/512 [MB] (191 MBps) Copying: 382/512 [MB] (191 MBps) Copying: 512/512 [MB] (average 191 MBps) 00:07:44.015 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:44.015 22:18:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.015 [2024-07-15 22:18:57.567047] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:44.015 [2024-07-15 22:18:57.567371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64427 ] 00:07:44.015 { 00:07:44.015 "subsystems": [ 00:07:44.015 { 00:07:44.015 "subsystem": "bdev", 00:07:44.015 "config": [ 00:07:44.015 { 00:07:44.015 "params": { 00:07:44.015 "block_size": 512, 00:07:44.015 "num_blocks": 1048576, 00:07:44.015 "name": "malloc0" 00:07:44.015 }, 00:07:44.015 "method": "bdev_malloc_create" 00:07:44.015 }, 00:07:44.015 { 00:07:44.015 "params": { 00:07:44.015 "filename": "/dev/zram1", 00:07:44.015 "name": "uring0" 00:07:44.015 }, 00:07:44.015 "method": "bdev_uring_create" 00:07:44.015 }, 00:07:44.015 { 00:07:44.015 "params": { 00:07:44.015 "name": "uring0" 00:07:44.015 }, 00:07:44.015 "method": "bdev_uring_delete" 00:07:44.015 }, 00:07:44.015 { 00:07:44.015 "method": "bdev_wait_for_examine" 00:07:44.015 } 00:07:44.015 ] 00:07:44.015 } 00:07:44.015 ] 00:07:44.015 } 00:07:44.272 [2024-07-15 22:18:57.710379] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.272 [2024-07-15 22:18:57.866750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.531 [2024-07-15 22:18:57.943025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.359  Copying: 0/0 [B] (average 0 Bps) 00:07:45.359 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.359 22:18:58 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:45.359 [2024-07-15 22:18:58.872374] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:45.359 [2024-07-15 22:18:58.872458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64455 ] 00:07:45.359 { 00:07:45.359 "subsystems": [ 00:07:45.359 { 00:07:45.359 "subsystem": "bdev", 00:07:45.359 "config": [ 00:07:45.359 { 00:07:45.359 "params": { 00:07:45.359 "block_size": 512, 00:07:45.359 "num_blocks": 1048576, 00:07:45.359 "name": "malloc0" 00:07:45.359 }, 00:07:45.359 "method": "bdev_malloc_create" 00:07:45.359 }, 00:07:45.359 { 00:07:45.359 "params": { 00:07:45.359 "filename": "/dev/zram1", 00:07:45.359 "name": "uring0" 00:07:45.359 }, 00:07:45.359 "method": "bdev_uring_create" 00:07:45.359 }, 00:07:45.359 { 00:07:45.359 "params": { 00:07:45.359 "name": "uring0" 00:07:45.359 }, 00:07:45.359 "method": "bdev_uring_delete" 00:07:45.359 }, 00:07:45.359 { 00:07:45.359 "method": "bdev_wait_for_examine" 00:07:45.359 } 00:07:45.359 ] 00:07:45.359 } 00:07:45.359 ] 00:07:45.359 } 00:07:45.618 [2024-07-15 22:18:59.014428] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.618 [2024-07-15 22:18:59.167773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.618 [2024-07-15 22:18:59.245090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.186 [2024-07-15 22:18:59.514992] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:46.186 [2024-07-15 22:18:59.515052] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:46.186 [2024-07-15 22:18:59.515062] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:46.186 [2024-07-15 22:18:59.515073] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.444 [2024-07-15 22:18:59.956582] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:46.702 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:46.962 00:07:46.962 real 0m16.629s 00:07:46.962 user 0m11.131s 00:07:46.962 sys 0m13.142s 00:07:46.962 ************************************ 00:07:46.962 END TEST dd_uring_copy 00:07:46.962 ************************************ 00:07:46.962 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.962 22:19:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.962 22:19:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:46.962 ************************************ 00:07:46.962 END TEST spdk_dd_uring 00:07:46.962 ************************************ 00:07:46.962 00:07:46.962 real 0m16.828s 00:07:46.962 user 0m11.212s 00:07:46.962 sys 0m13.265s 00:07:46.962 22:19:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.962 22:19:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:46.962 22:19:00 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:46.962 22:19:00 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:46.962 22:19:00 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.962 22:19:00 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.962 22:19:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.962 ************************************ 00:07:46.962 START TEST spdk_dd_sparse 00:07:46.962 ************************************ 00:07:46.962 22:19:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:47.222 * Looking for test storage... 00:07:47.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:47.222 1+0 records in 00:07:47.222 1+0 records out 00:07:47.222 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00527849 s, 795 MB/s 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:47.222 1+0 records in 00:07:47.222 1+0 records out 00:07:47.222 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00464724 s, 903 MB/s 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:47.222 1+0 records in 00:07:47.222 1+0 records out 00:07:47.222 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00976012 s, 430 MB/s 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:47.222 ************************************ 00:07:47.222 START TEST dd_sparse_file_to_file 00:07:47.222 ************************************ 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:47.222 22:19:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:47.222 [2024-07-15 22:19:00.722525] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:47.222 [2024-07-15 22:19:00.722642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64553 ] 00:07:47.222 { 00:07:47.222 "subsystems": [ 00:07:47.222 { 00:07:47.222 "subsystem": "bdev", 00:07:47.222 "config": [ 00:07:47.222 { 00:07:47.222 "params": { 00:07:47.222 "block_size": 4096, 00:07:47.222 "filename": "dd_sparse_aio_disk", 00:07:47.222 "name": "dd_aio" 00:07:47.223 }, 00:07:47.223 "method": "bdev_aio_create" 00:07:47.223 }, 00:07:47.223 { 00:07:47.223 "params": { 00:07:47.223 "lvs_name": "dd_lvstore", 00:07:47.223 "bdev_name": "dd_aio" 00:07:47.223 }, 00:07:47.223 "method": "bdev_lvol_create_lvstore" 00:07:47.223 }, 00:07:47.223 { 00:07:47.223 "method": "bdev_wait_for_examine" 00:07:47.223 } 00:07:47.223 ] 00:07:47.223 } 00:07:47.223 ] 00:07:47.223 } 00:07:47.482 [2024-07-15 22:19:00.869216] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.482 [2024-07-15 22:19:01.017467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.482 [2024-07-15 22:19:01.090535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.000  Copying: 12/36 [MB] (average 666 MBps) 00:07:48.000 00:07:48.000 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:48.000 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:48.000 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:48.000 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:48.000 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:48.000 22:19:01 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:48.001 00:07:48.001 real 0m0.868s 00:07:48.001 user 0m0.565s 00:07:48.001 sys 0m0.454s 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:48.001 ************************************ 00:07:48.001 END TEST dd_sparse_file_to_file 00:07:48.001 ************************************ 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:48.001 ************************************ 00:07:48.001 START TEST dd_sparse_file_to_bdev 00:07:48.001 ************************************ 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.001 22:19:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.260 { 00:07:48.260 "subsystems": [ 00:07:48.260 { 00:07:48.260 "subsystem": "bdev", 00:07:48.260 "config": [ 00:07:48.260 { 00:07:48.260 "params": { 00:07:48.260 "block_size": 4096, 00:07:48.260 "filename": "dd_sparse_aio_disk", 00:07:48.260 "name": "dd_aio" 00:07:48.260 }, 00:07:48.260 "method": "bdev_aio_create" 00:07:48.260 }, 00:07:48.260 { 00:07:48.260 "params": { 00:07:48.260 "lvs_name": "dd_lvstore", 00:07:48.260 "lvol_name": "dd_lvol", 00:07:48.260 "size_in_mib": 36, 00:07:48.260 "thin_provision": true 00:07:48.260 }, 00:07:48.260 "method": 
"bdev_lvol_create" 00:07:48.260 }, 00:07:48.260 { 00:07:48.260 "method": "bdev_wait_for_examine" 00:07:48.260 } 00:07:48.260 ] 00:07:48.260 } 00:07:48.260 ] 00:07:48.260 } 00:07:48.260 [2024-07-15 22:19:01.654651] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:48.260 [2024-07-15 22:19:01.654722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64600 ] 00:07:48.260 [2024-07-15 22:19:01.795864] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.519 [2024-07-15 22:19:01.944126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.519 [2024-07-15 22:19:02.016577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.086  Copying: 12/36 [MB] (average 428 MBps) 00:07:49.086 00:07:49.086 00:07:49.086 real 0m0.892s 00:07:49.086 user 0m0.578s 00:07:49.086 sys 0m0.452s 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.086 ************************************ 00:07:49.086 END TEST dd_sparse_file_to_bdev 00:07:49.086 ************************************ 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:49.086 ************************************ 00:07:49.086 START TEST dd_sparse_bdev_to_file 00:07:49.086 ************************************ 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:49.086 22:19:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:49.086 [2024-07-15 22:19:02.618182] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:49.086 [2024-07-15 22:19:02.618258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64633 ] 00:07:49.086 { 00:07:49.086 "subsystems": [ 00:07:49.086 { 00:07:49.086 "subsystem": "bdev", 00:07:49.086 "config": [ 00:07:49.086 { 00:07:49.086 "params": { 00:07:49.087 "block_size": 4096, 00:07:49.087 "filename": "dd_sparse_aio_disk", 00:07:49.087 "name": "dd_aio" 00:07:49.087 }, 00:07:49.087 "method": "bdev_aio_create" 00:07:49.087 }, 00:07:49.087 { 00:07:49.087 "method": "bdev_wait_for_examine" 00:07:49.087 } 00:07:49.087 ] 00:07:49.087 } 00:07:49.087 ] 00:07:49.087 } 00:07:49.345 [2024-07-15 22:19:02.760630] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.345 [2024-07-15 22:19:02.910351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.604 [2024-07-15 22:19:02.983888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.862  Copying: 12/36 [MB] (average 666 MBps) 00:07:49.862 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:49.862 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:49.862 00:07:49.863 real 0m0.862s 00:07:49.863 user 0m0.553s 00:07:49.863 sys 0m0.454s 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:49.863 ************************************ 00:07:49.863 END TEST dd_sparse_bdev_to_file 00:07:49.863 ************************************ 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:49.863 22:19:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:50.120 22:19:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:50.120 22:19:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:50.120 00:07:50.120 real 0m3.029s 00:07:50.120 user 0m1.824s 00:07:50.120 sys 0m1.643s 00:07:50.120 ************************************ 00:07:50.120 END TEST spdk_dd_sparse 00:07:50.120 
************************************ 00:07:50.120 22:19:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.120 22:19:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:50.120 22:19:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:50.120 22:19:03 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:50.120 22:19:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.120 22:19:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.121 22:19:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:50.121 ************************************ 00:07:50.121 START TEST spdk_dd_negative 00:07:50.121 ************************************ 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:50.121 * Looking for test storage... 00:07:50.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.121 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.379 ************************************ 00:07:50.379 START TEST dd_invalid_arguments 00:07:50.379 ************************************ 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.379 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.379 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:50.379 00:07:50.379 CPU options: 00:07:50.379 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:50.379 (like [0,1,10]) 00:07:50.379 --lcores lcore to CPU mapping list. The list is in the format: 00:07:50.379 [<,lcores[@CPUs]>...] 00:07:50.379 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:50.379 Within the group, '-' is used for range separator, 00:07:50.379 ',' is used for single number separator. 00:07:50.379 '( )' can be omitted for single element group, 00:07:50.379 '@' can be omitted if cpus and lcores have the same value 00:07:50.379 --disable-cpumask-locks Disable CPU core lock files. 00:07:50.379 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:50.379 pollers in the app support interrupt mode) 00:07:50.379 -p, --main-core main (primary) core for DPDK 00:07:50.379 00:07:50.379 Configuration options: 00:07:50.379 -c, --config, --json JSON config file 00:07:50.379 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:50.379 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:50.379 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:50.379 --rpcs-allowed comma-separated list of permitted RPCS 00:07:50.379 --json-ignore-init-errors don't exit on invalid config entry 00:07:50.379 00:07:50.379 Memory options: 00:07:50.379 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:50.379 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:50.379 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:50.379 -R, --huge-unlink unlink huge files after initialization 00:07:50.379 -n, --mem-channels number of memory channels used for DPDK 00:07:50.379 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:50.379 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:50.379 --no-huge run without using hugepages 00:07:50.379 --enforce-numa enforce NUMA allocations from the correct socket 00:07:50.379 -i, --shm-id shared memory ID (optional) 00:07:50.379 -g, --single-file-segments force creating just one hugetlbfs file 00:07:50.379 00:07:50.379 PCI options: 00:07:50.379 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:50.379 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:50.379 -u, --no-pci disable PCI access 00:07:50.379 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:50.379 00:07:50.379 Log options: 00:07:50.379 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:50.380 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:50.380 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:50.380 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:50.380 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:50.380 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:50.380 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:50.380 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:50.380 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:50.380 virtio, virtio_blk, virtio_dev, virtio_pci, 
virtio_user, 00:07:50.380 virtio_vfio_user, vmd) 00:07:50.380 --silence-noticelog disable notice level logging to stderr 00:07:50.380 00:07:50.380 Trace options: 00:07:50.380 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:50.380 setting 0 to disable trace (default 32768) 00:07:50.380 Tracepoints vary in size and can use more than one trace entry. 00:07:50.380 -e, --tpoint-group [:] 00:07:50.380 group_name - tracep/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:50.380 [2024-07-15 22:19:03.818049] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:50.380 oint group name for spdk trace buffers (bdev, ftl, 00:07:50.380 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:50.380 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:50.380 a tracepoint group. First tpoint inside a group can be enabled by 00:07:50.380 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:50.380 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:50.380 in /include/spdk_internal/trace_defs.h 00:07:50.380 00:07:50.380 Other options: 00:07:50.380 -h, --help show this usage 00:07:50.380 -v, --version print SPDK version 00:07:50.380 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:50.380 --env-context Opaque context for use of the env implementation 00:07:50.380 00:07:50.380 Application specific: 00:07:50.380 [--------- DD Options ---------] 00:07:50.380 --if Input file. Must specify either --if or --ib. 00:07:50.380 --ib Input bdev. Must specifier either --if or --ib 00:07:50.380 --of Output file. Must specify either --of or --ob. 00:07:50.380 --ob Output bdev. Must specify either --of or --ob. 00:07:50.380 --iflag Input file flags. 00:07:50.380 --oflag Output file flags. 00:07:50.380 --bs I/O unit size (default: 4096) 00:07:50.380 --qd Queue depth (default: 2) 00:07:50.380 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:50.380 --skip Skip this many I/O units at start of input. (default: 0) 00:07:50.380 --seek Skip this many I/O units at start of output. (default: 0) 00:07:50.380 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:50.380 --sparse Enable hole skipping in input target 00:07:50.380 Available iflag and oflag values: 00:07:50.380 append - append mode 00:07:50.380 direct - use direct I/O for data 00:07:50.380 directory - fail unless a directory 00:07:50.380 dsync - use synchronized I/O for data 00:07:50.380 noatime - do not update access time 00:07:50.380 noctty - do not assign controlling terminal from file 00:07:50.380 nofollow - do not follow symlinks 00:07:50.380 nonblock - use non-blocking I/O 00:07:50.380 sync - use synchronized I/O for data and metadata 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.380 00:07:50.380 real 0m0.070s 00:07:50.380 user 0m0.038s 00:07:50.380 sys 0m0.031s 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 ************************************ 00:07:50.380 END TEST dd_invalid_arguments 00:07:50.380 ************************************ 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 ************************************ 00:07:50.380 START TEST dd_double_input 00:07:50.380 ************************************ 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.380 [2024-07-15 22:19:03.963999] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.380 00:07:50.380 real 0m0.072s 00:07:50.380 user 0m0.033s 00:07:50.380 sys 0m0.038s 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.380 22:19:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 ************************************ 00:07:50.380 END TEST dd_double_input 00:07:50.380 ************************************ 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.640 ************************************ 00:07:50.640 START TEST dd_double_output 00:07:50.640 ************************************ 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.640 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.641 [2024-07-15 22:19:04.110582] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.641 00:07:50.641 real 0m0.074s 00:07:50.641 user 0m0.044s 00:07:50.641 sys 0m0.028s 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:50.641 ************************************ 00:07:50.641 END TEST dd_double_output 00:07:50.641 ************************************ 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.641 ************************************ 00:07:50.641 START TEST dd_no_input 00:07:50.641 ************************************ 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.641 22:19:04 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.641 [2024-07-15 22:19:04.255706] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.641 00:07:50.641 real 0m0.072s 00:07:50.641 user 0m0.040s 00:07:50.641 sys 0m0.031s 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.641 ************************************ 00:07:50.641 END TEST dd_no_input 00:07:50.641 22:19:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:50.641 ************************************ 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.899 ************************************ 00:07:50.899 START TEST dd_no_output 00:07:50.899 ************************************ 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.899 22:19:04 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.899 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.900 [2024-07-15 22:19:04.395536] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.900 ************************************ 00:07:50.900 00:07:50.900 real 0m0.068s 00:07:50.900 user 0m0.045s 00:07:50.900 sys 0m0.023s 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:50.900 END TEST dd_no_output 00:07:50.900 ************************************ 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.900 ************************************ 00:07:50.900 START TEST dd_wrong_blocksize 00:07:50.900 ************************************ 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # 
local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.900 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.900 [2024-07-15 22:19:04.527661] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.158 ************************************ 00:07:51.158 END TEST dd_wrong_blocksize 00:07:51.158 ************************************ 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.158 00:07:51.158 real 0m0.068s 00:07:51.158 user 0m0.034s 00:07:51.158 sys 0m0.033s 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.158 ************************************ 00:07:51.158 START TEST dd_smaller_blocksize 00:07:51.158 ************************************ 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:51.158 
22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.158 22:19:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:51.158 [2024-07-15 22:19:04.671178] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:51.158 [2024-07-15 22:19:04.671260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64857 ] 00:07:51.416 [2024-07-15 22:19:04.813491] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.416 [2024-07-15 22:19:04.962662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.416 [2024-07-15 22:19:05.035194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.034 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:52.295 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:52.295 [2024-07-15 22:19:05.804862] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:52.295 [2024-07-15 22:19:05.804963] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.554 [2024-07-15 22:19:05.965532] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.554 00:07:52.554 real 0m1.482s 00:07:52.554 user 0m0.592s 00:07:52.554 sys 0m0.781s 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.554 ************************************ 00:07:52.554 END TEST dd_smaller_blocksize 00:07:52.554 ************************************ 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.554 ************************************ 00:07:52.554 START TEST dd_invalid_count 00:07:52.554 ************************************ 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.554 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:52.813 [2024-07-15 22:19:06.231754] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.813 00:07:52.813 real 0m0.077s 00:07:52.813 user 0m0.046s 00:07:52.813 sys 0m0.029s 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 ************************************ 00:07:52.813 END TEST dd_invalid_count 00:07:52.813 ************************************ 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 ************************************ 00:07:52.813 START TEST dd_invalid_oflag 00:07:52.813 ************************************ 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.813 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:52.814 [2024-07-15 22:19:06.377734] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.814 00:07:52.814 real 0m0.072s 00:07:52.814 user 0m0.036s 00:07:52.814 sys 0m0.034s 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.814 ************************************ 00:07:52.814 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 END TEST dd_invalid_oflag 00:07:52.814 ************************************ 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 START TEST dd_invalid_iflag 00:07:53.073 ************************************ 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:53.073 22:19:06 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:53.073 [2024-07-15 22:19:06.524953] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.073 00:07:53.073 real 0m0.072s 00:07:53.073 user 0m0.044s 00:07:53.073 sys 0m0.026s 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 END TEST dd_invalid_iflag 00:07:53.073 ************************************ 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 START TEST dd_unknown_flag 00:07:53.073 ************************************ 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- 
# unknown_flag 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.073 22:19:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:53.073 [2024-07-15 22:19:06.669309] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:53.073 [2024-07-15 22:19:06.669395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64955 ] 00:07:53.332 [2024-07-15 22:19:06.812757] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.332 [2024-07-15 22:19:06.959245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.590 [2024-07-15 22:19:07.032384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.590 [2024-07-15 22:19:07.078311] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:53.590 [2024-07-15 22:19:07.078373] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.590 [2024-07-15 22:19:07.078435] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:53.590 [2024-07-15 22:19:07.078445] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.590 [2024-07-15 22:19:07.078722] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:53.590 [2024-07-15 22:19:07.078738] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.590 [2024-07-15 22:19:07.078802] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:53.590 [2024-07-15 22:19:07.078810] app.c:1045:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:53.849 [2024-07-15 22:19:07.237869] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.849 00:07:53.849 real 0m0.757s 00:07:53.849 user 0m0.444s 00:07:53.849 sys 0m0.221s 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:53.849 ************************************ 00:07:53.849 END TEST dd_unknown_flag 00:07:53.849 ************************************ 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.849 ************************************ 00:07:53.849 START TEST dd_invalid_json 00:07:53.849 ************************************ 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:53.849 22:19:07 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.849 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:54.108 [2024-07-15 22:19:07.502154] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:54.108 [2024-07-15 22:19:07.502240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64989 ] 00:07:54.108 [2024-07-15 22:19:07.646565] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.374 [2024-07-15 22:19:07.796129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.374 [2024-07-15 22:19:07.796207] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:54.374 [2024-07-15 22:19:07.796223] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:54.374 [2024-07-15 22:19:07.796232] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.374 [2024-07-15 22:19:07.796271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.374 00:07:54.374 real 0m0.484s 00:07:54.374 user 0m0.291s 00:07:54.374 sys 0m0.092s 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:54.374 ************************************ 00:07:54.374 END TEST dd_invalid_json 00:07:54.374 ************************************ 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:54.374 00:07:54.374 real 0m4.382s 00:07:54.374 user 0m2.015s 00:07:54.374 sys 0m2.036s 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.374 22:19:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:54.374 ************************************ 00:07:54.374 END TEST spdk_dd_negative 00:07:54.374 ************************************ 00:07:54.633 22:19:08 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:54.633 00:07:54.633 real 1m32.397s 00:07:54.633 user 0m59.305s 00:07:54.633 sys 0m41.575s 00:07:54.633 22:19:08 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.633 22:19:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.633 ************************************ 00:07:54.634 END TEST spdk_dd 00:07:54.634 ************************************ 00:07:54.634 22:19:08 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.634 22:19:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:54.634 22:19:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.634 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:07:54.634 22:19:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:54.634 22:19:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:54.634 22:19:08 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.634 22:19:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.634 22:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.634 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:07:54.634 ************************************ 00:07:54.634 START TEST nvmf_tcp 00:07:54.634 ************************************ 00:07:54.634 22:19:08 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.892 * Looking for test storage... 00:07:54.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.892 22:19:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.893 22:19:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.893 22:19:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.893 22:19:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.893 22:19:08 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.893 22:19:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.893 22:19:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.893 22:19:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:54.893 22:19:08 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:54.893 22:19:08 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.893 22:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:54.893 22:19:08 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:54.893 22:19:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.893 22:19:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.893 22:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.893 ************************************ 00:07:54.893 START TEST nvmf_host_management 00:07:54.893 ************************************ 00:07:54.893 
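In the nvmf/common.sh setup traced above (and re-sourced by host_management.sh just below), nvme gen-hostnqn produces NVME_HOSTNQN in the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, NVME_HOSTID is the bare UUID, and NVME_HOST packs both into the --hostnqn/--hostid options that tests using the kernel initiator pass to nvme connect. A sketch of how an initiator-side connect would consume them; the target address and subsystem NQN below are the ones this test creates further down in the log, and nvme-cli is assumed to be installed:

    # Sketch, not taken from this test's own commands.
    NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # strip the prefix to get the bare UUID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"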
22:19:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:54.893 * Looking for test storage... 00:07:54.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.893 22:19:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.893 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
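nvmftestinit, visible at the end of the trace above, calls prepare_net_devs; the [[ virt != virt ]], [[ virt == phy ]] and [[ tcp == tcp ]] checks on the lines that follow are how it decides between probing physical NICs and building a virtual topology. With NET_TYPE=virt and a TCP transport it falls through to nvmf_veth_init. A condensed sketch of that branch, simplified from the nvmf/common.sh logic quoted in the xtrace; the transport variable name is an assumption, since the log only shows the already-expanded comparison tcp == tcp:

    # Simplified sketch; the real function also handles the phy and phy-fallback cases in full.
    prepare_net_devs() {
        local -g is_hw=no
        [[ $NET_TYPE == phy || $NET_TYPE == phy-fallback ]] && is_hw=yes
        if [[ $is_hw == no ]]; then
            [[ $TEST_TRANSPORT == tcp ]] && nvmf_veth_init   # builds the veth/netns topology traced below
        fi
    }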
00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.152 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:55.153 Cannot find device "nvmf_init_br" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:55.153 Cannot find device "nvmf_tgt_br" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.153 Cannot find device "nvmf_tgt_br2" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:55.153 Cannot find device "nvmf_init_br" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:55.153 Cannot find device "nvmf_tgt_br" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:55.153 22:19:08 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:55.153 Cannot find device "nvmf_tgt_br2" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:55.153 Cannot find device "nvmf_br" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:55.153 Cannot find device "nvmf_init_if" 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.153 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
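The nvmf_veth_init steps above build the virtual test network: an nvmf_tgt_ns_spdk namespace holding the target-side ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) left in the root namespace, and an nvmf_br bridge that the *_br peer ends are attached to on the next lines, after which TCP port 4420 is opened in iptables and connectivity is verified with the pings below. The "Cannot find device" and "Cannot open network namespace" messages above come from the cleanup attempts that precede the setup and are expected on a fresh runner. Condensed, the same topology can be reproduced with:

    # Condensed replay of the commands in the trace (root privileges required).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # next in the log: nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 are enslaved to nvmf_br,
    # and iptables accepts TCP dport 4420 arriving on nvmf_init_if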
00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.412 22:19:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.412 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.412 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:55.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:55.412 00:07:55.412 --- 10.0.0.2 ping statistics --- 00:07:55.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.412 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:55.412 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:55.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:55.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:07:55.412 00:07:55.412 --- 10.0.0.3 ping statistics --- 00:07:55.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.412 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:07:55.671 00:07:55.671 --- 10.0.0.1 ping statistics --- 00:07:55.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.671 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65254 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65254 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65254 ']' 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.671 22:19:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.671 [2024-07-15 22:19:09.173268] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:55.672 [2024-07-15 22:19:09.173388] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.929 [2024-07-15 22:19:09.328704] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.929 [2024-07-15 22:19:09.433421] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.929 [2024-07-15 22:19:09.433483] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.929 [2024-07-15 22:19:09.433493] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.929 [2024-07-15 22:19:09.433501] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.929 [2024-07-15 22:19:09.433508] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
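nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with -e 0xFFFF (all tracepoint groups enabled, hence the spdk_trace notices) and the core mask -m 0x1E. The mask 0x1E is binary 11110, so the reactors land on cores 1 through 4, which is exactly what the four "Reactor started on core" notices just below report. A quick way to decode such a mask:

    # Decode an SPDK core mask into the core numbers it selects.
    mask=0x1E
    printf 'cores:'
    for bit in $(seq 0 31); do
        (( (mask >> bit) & 1 )) && printf ' %d' "$bit"
    done
    echo    # prints: cores: 1 2 3 4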
00:07:55.929 [2024-07-15 22:19:09.433714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.929 [2024-07-15 22:19:09.434695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.929 [2024-07-15 22:19:09.434810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.929 [2024-07-15 22:19:09.434812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.929 [2024-07-15 22:19:09.478510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 [2024-07-15 22:19:10.092248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:56.493 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.751 Malloc0 00:07:56.751 [2024-07-15 22:19:10.171382] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65310 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65310 /var/tmp/bdevperf.sock 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65310 ']' 
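By the end of the trace above the target side is fully provisioned: the TCP transport has been created over RPC (nvmf_create_transport -t tcp -o -u 8192), a Malloc0 bdev exists, and a listener is up on 10.0.0.2:4420 for the subsystem whose NQN appears in the bdevperf config just below. Only the transport call is shown explicitly; the remaining RPCs are applied from the generated rpcs.txt hidden behind the cat at host_management.sh@23-30. A plausible reconstruction of that batch, using standard SPDK RPCs and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier; the literal file contents are not printed in this log, so treat this as a hedged sketch rather than the exact script:

    # Hedged reconstruction, not the literal rpcs.txt; the rpc.py path is illustrative.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The add_host line is inferred from the hostnqn in the bdevperf JSON below and from the nvmf_subsystem_remove_host call that the host-management test issues at the end of this section.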
00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:56.751 { 00:07:56.751 "params": { 00:07:56.751 "name": "Nvme$subsystem", 00:07:56.751 "trtype": "$TEST_TRANSPORT", 00:07:56.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.751 "adrfam": "ipv4", 00:07:56.751 "trsvcid": "$NVMF_PORT", 00:07:56.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.751 "hdgst": ${hdgst:-false}, 00:07:56.751 "ddgst": ${ddgst:-false} 00:07:56.751 }, 00:07:56.751 "method": "bdev_nvme_attach_controller" 00:07:56.751 } 00:07:56.751 EOF 00:07:56.751 )") 00:07:56.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:56.751 22:19:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:56.751 "params": { 00:07:56.751 "name": "Nvme0", 00:07:56.751 "trtype": "tcp", 00:07:56.751 "traddr": "10.0.0.2", 00:07:56.751 "adrfam": "ipv4", 00:07:56.751 "trsvcid": "4420", 00:07:56.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:56.751 "hdgst": false, 00:07:56.751 "ddgst": false 00:07:56.751 }, 00:07:56.751 "method": "bdev_nvme_attach_controller" 00:07:56.751 }' 00:07:56.751 [2024-07-15 22:19:10.294979] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
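The heredoc above renders the bdev_nvme_attach_controller parameters that printf then emits, and bdevperf reads them through --json /dev/fd/63, i.e. a process substitution. Written out by hand, and assuming the standard SPDK JSON-config wrapper that gen_nvmf_target_json places around the fragment, the invocation is roughly:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)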
00:07:56.751 [2024-07-15 22:19:10.295056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65310 ] 00:07:57.009 [2024-07-15 22:19:10.439703] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.009 [2024-07-15 22:19:10.591449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.267 [2024-07-15 22:19:10.674018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.267 Running I/O for 10 seconds... 00:07:57.526 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.526 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:57.526 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:57.526 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.526 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.785 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.785 [2024-07-15 22:19:11.198893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.198964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.198991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.199013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.199033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.199054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.199073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.785 [2024-07-15 22:19:11.199091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-07-15 22:19:11.199100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.786 [2024-07-15 22:19:11.199937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.786 [2024-07-15 22:19:11.199948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.199956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.199968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 
[2024-07-15 22:19:11.199976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.199987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.199995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 
22:19:11.200165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.787 [2024-07-15 22:19:11.200242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:57.787 [2024-07-15 22:19:11.200252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23af1c0 is same with the state(5) to be set 00:07:57.787 task offset: 93056 on job bdev=Nvme0n1 fails 00:07:57.787 00:07:57.787 Latency(us) 00:07:57.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.787 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:57.787 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:57.787 Verification LBA range: start 0x0 length 0x400 00:07:57.787 Nvme0n1 : 0.40 1779.40 111.21 161.76 0.00 32036.94 2052.93 31794.17 00:07:57.787 =================================================================================================================== 00:07:57.787 Total : 1779.40 111.21 161.76 0.00 32036.94 2052.93 31794.17 00:07:57.787 [2024-07-15 22:19:11.200346] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23af1c0 was disconnected and freed. reset controller. 
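The flood of ABORTED - SQ DELETION completions and the failed Nvme0n1 job above are the intended outcome here: the test pulls the host out of the subsystem while bdevperf is mid-verify, then re-adds it (next lines) so the controller reset can reconnect. Reduced to the two RPCs involved, exactly as they appear in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # drops the live connection, I/O starts failing
$rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # lets the initiator's reset path reconnect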
00:07:57.787 [2024-07-15 22:19:11.201378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.787 [2024-07-15 22:19:11.203962] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.787 [2024-07-15 22:19:11.203989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a6ef0 (9): Bad file descriptor 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.787 22:19:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:57.787 [2024-07-15 22:19:11.211765] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65310 00:07:58.723 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65310) - No such process 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:58.723 { 00:07:58.723 "params": { 00:07:58.723 "name": "Nvme$subsystem", 00:07:58.723 "trtype": "$TEST_TRANSPORT", 00:07:58.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:58.723 "adrfam": "ipv4", 00:07:58.723 "trsvcid": "$NVMF_PORT", 00:07:58.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:58.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:58.723 "hdgst": ${hdgst:-false}, 00:07:58.723 "ddgst": ${ddgst:-false} 00:07:58.723 }, 00:07:58.723 "method": "bdev_nvme_attach_controller" 00:07:58.723 } 00:07:58.723 EOF 00:07:58.723 )") 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:58.723 22:19:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:58.723 "params": { 00:07:58.723 "name": "Nvme0", 00:07:58.723 "trtype": "tcp", 00:07:58.723 "traddr": "10.0.0.2", 00:07:58.723 "adrfam": "ipv4", 00:07:58.723 "trsvcid": "4420", 00:07:58.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:58.723 "hdgst": false, 00:07:58.723 "ddgst": false 00:07:58.723 }, 00:07:58.723 "method": "bdev_nvme_attach_controller" 00:07:58.723 }' 00:07:58.723 [2024-07-15 22:19:12.265991] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:58.723 [2024-07-15 22:19:12.266069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65346 ] 00:07:58.983 [2024-07-15 22:19:12.398428] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.983 [2024-07-15 22:19:12.553222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.241 [2024-07-15 22:19:12.634876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.241 Running I/O for 1 seconds... 00:08:00.174 00:08:00.174 Latency(us) 00:08:00.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:00.174 Verification LBA range: start 0x0 length 0x400 00:08:00.174 Nvme0n1 : 1.03 1807.23 112.95 0.00 0.00 34836.92 4526.98 45690.96 00:08:00.174 =================================================================================================================== 00:08:00.174 Total : 1807.23 112.95 0.00 0.00 34836.92 4526.98 45690.96 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.738 rmmod nvme_tcp 00:08:00.738 rmmod nvme_fabrics 00:08:00.738 rmmod nvme_keyring 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:00.738 
22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65254 ']' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65254 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65254 ']' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65254 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65254 00:08:00.738 killing process with pid 65254 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65254' 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65254 00:08:00.738 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65254 00:08:00.995 [2024-07-15 22:19:14.493616] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:00.995 00:08:00.995 real 0m6.206s 00:08:00.995 user 0m22.738s 00:08:00.995 ************************************ 00:08:00.995 END TEST nvmf_host_management 00:08:00.995 ************************************ 00:08:00.995 sys 0m1.925s 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.995 22:19:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.253 22:19:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.253 22:19:14 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:01.253 22:19:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.253 22:19:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.253 22:19:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.253 ************************************ 00:08:01.253 START TEST nvmf_lvol 00:08:01.253 
************************************ 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:01.253 * Looking for test storage... 00:08:01.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
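nvmftestinit above ends up in nvmf_veth_init, and the ip commands that follow build the test topology: one initiator veth left on the host, two target veths moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge. Condensed from the log below, the setup amounts to roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# plus bringing each link up and an iptables ACCEPT rule for TCP port 4420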
00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:01.253 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:01.254 Cannot find device "nvmf_tgt_br" 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.254 Cannot find device "nvmf_tgt_br2" 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:01.254 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:01.511 Cannot find device "nvmf_tgt_br" 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:01.511 Cannot find device "nvmf_tgt_br2" 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:01.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:01.511 22:19:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:01.511 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:01.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:08:01.769 00:08:01.769 --- 10.0.0.2 ping statistics --- 00:08:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.769 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:01.769 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:01.769 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:08:01.769 00:08:01.769 --- 10.0.0.3 ping statistics --- 00:08:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.769 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:01.769 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:01.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:08:01.769 00:08:01.769 --- 10.0.0.1 ping statistics --- 00:08:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.770 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65568 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65568 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65568 ']' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.770 22:19:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.770 [2024-07-15 22:19:15.325366] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:08:01.770 [2024-07-15 22:19:15.325444] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.027 [2024-07-15 22:19:15.470978] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.027 [2024-07-15 22:19:15.619448] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.027 [2024-07-15 22:19:15.619725] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.027 [2024-07-15 22:19:15.620341] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.027 [2024-07-15 22:19:15.620522] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.027 [2024-07-15 22:19:15.620584] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.027 [2024-07-15 22:19:15.620766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.027 [2024-07-15 22:19:15.620890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.027 [2024-07-15 22:19:15.620889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.326 [2024-07-15 22:19:15.696324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.584 22:19:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.584 22:19:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:02.584 22:19:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.584 22:19:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.584 22:19:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.842 22:19:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.842 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.842 [2024-07-15 22:19:16.418813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.842 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.100 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:03.100 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.358 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:03.358 22:19:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:03.616 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:03.875 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=31fc7993-16c9-478b-8e9e-7b0220229b08 00:08:03.875 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31fc7993-16c9-478b-8e9e-7b0220229b08 lvol 20 00:08:04.133 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=a25fbfee-22ae-4cc5-90c2-e215fc99f961 00:08:04.133 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.133 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a25fbfee-22ae-4cc5-90c2-e215fc99f961 00:08:04.392 22:19:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.650 [2024-07-15 22:19:18.103609] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.650 22:19:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.909 22:19:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65638 00:08:04.909 22:19:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.909 22:19:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.874 22:19:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a25fbfee-22ae-4cc5-90c2-e215fc99f961 MY_SNAPSHOT 00:08:06.133 22:19:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b051f7b9-0426-40ac-9cc0-263da682d356 00:08:06.133 22:19:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a25fbfee-22ae-4cc5-90c2-e215fc99f961 30 00:08:06.392 22:19:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b051f7b9-0426-40ac-9cc0-263da682d356 MY_CLONE 00:08:06.650 22:19:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5428f484-e1e8-4501-b6af-6fb1290becc9 00:08:06.650 22:19:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5428f484-e1e8-4501-b6af-6fb1290becc9 00:08:07.218 22:19:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65638 00:08:15.369 Initializing NVMe Controllers 00:08:15.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.369 Controller IO queue size 128, less than required. 00:08:15.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:15.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.369 Initialization complete. Launching workers. 
00:08:15.369 ======================================================== 00:08:15.369 Latency(us) 00:08:15.369 Device Information : IOPS MiB/s Average min max 00:08:15.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7778.79 30.39 16463.26 1872.28 95231.87 00:08:15.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 5872.29 22.94 21800.72 199.77 143933.70 00:08:15.369 ======================================================== 00:08:15.369 Total : 13651.09 53.32 18759.28 199.77 143933.70 00:08:15.369 00:08:15.369 22:19:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.369 22:19:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a25fbfee-22ae-4cc5-90c2-e215fc99f961 00:08:15.626 22:19:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31fc7993-16c9-478b-8e9e-7b0220229b08 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.884 rmmod nvme_tcp 00:08:15.884 rmmod nvme_fabrics 00:08:15.884 rmmod nvme_keyring 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65568 ']' 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65568 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65568 ']' 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65568 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65568 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.884 killing process with pid 65568 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65568' 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65568 00:08:15.884 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65568 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.472 
22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:16.472 00:08:16.472 real 0m15.248s 00:08:16.472 user 1m1.780s 00:08:16.472 sys 0m5.000s 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.472 ************************************ 00:08:16.472 END TEST nvmf_lvol 00:08:16.472 ************************************ 00:08:16.472 22:19:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.472 22:19:29 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.472 22:19:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.472 22:19:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.472 22:19:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.472 ************************************ 00:08:16.472 START TEST nvmf_lvs_grow 00:08:16.472 ************************************ 00:08:16.472 22:19:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.472 * Looking for test storage... 
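With nvmf_lvol finished (15.2 s of wall time above) and nvmf_lvs_grow starting, it helps to restate what the preceding xtrace actually exercised, since the rpc.py calls are interleaved with perf output. A condensed sketch of that sequence taken from the trace, with shell variables standing in for the UUIDs the test captured (a summary for orientation, not a substitute for nvmf_lvol.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                   # Malloc0
    $rpc bdev_malloc_create 64 512                                   # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the namespace:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    # teardown order from the trace: subsystem first, then the lvol, then the store
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"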
00:08:16.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.472 22:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.732 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:16.733 Cannot find device "nvmf_tgt_br" 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.733 Cannot find device "nvmf_tgt_br2" 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:16.733 Cannot find device "nvmf_tgt_br" 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:16.733 Cannot find device "nvmf_tgt_br2" 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:16.733 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.733 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.734 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.734 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:16.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:16.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:08:16.998 00:08:16.998 --- 10.0.0.2 ping statistics --- 00:08:16.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.998 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:16.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:16.998 00:08:16.998 --- 10.0.0.3 ping statistics --- 00:08:16.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.998 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:16.998 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:17.256 00:08:17.256 --- 10.0.0.1 ping statistics --- 00:08:17.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.256 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65959 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65959 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65959 ']' 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
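nvmfappstart above launches the target pinned to core 0 (-m 0x1) inside the namespace and then blocks until its JSON-RPC socket answers before issuing any configuration. A minimal way to reproduce that by hand, assuming the default /var/tmp/spdk.sock path; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the target answers on its RPC socket
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done
    $rpc nvmf_create_transport -t tcp -o -u 8192   # as in the trace that follows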
00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.256 22:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.256 [2024-07-15 22:19:30.729265] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:17.256 [2024-07-15 22:19:30.729350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.256 [2024-07-15 22:19:30.874677] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.514 [2024-07-15 22:19:30.974482] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.514 [2024-07-15 22:19:30.974535] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.514 [2024-07-15 22:19:30.974545] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.514 [2024-07-15 22:19:30.974553] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.514 [2024-07-15 22:19:30.974560] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.514 [2024-07-15 22:19:30.974586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.514 [2024-07-15 22:19:31.016533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.081 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.081 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:18.081 22:19:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.081 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.081 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:18.388 [2024-07-15 22:19:31.944402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.388 ************************************ 00:08:18.388 START TEST lvs_grow_clean 00:08:18.388 ************************************ 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:18.388 22:19:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.388 22:19:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.648 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:18.648 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:18.906 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=61465832-4867-49af-889d-11d6e9d19e3f 00:08:18.906 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:18.906 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.164 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.164 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.164 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 61465832-4867-49af-889d-11d6e9d19e3f lvol 150 00:08:19.422 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fc5df086-e6f7-4517-8b11-f2970100caa6 00:08:19.422 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:19.422 22:19:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:19.681 [2024-07-15 22:19:33.068985] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:19.681 [2024-07-15 22:19:33.069110] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:19.681 true 00:08:19.681 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:19.681 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:19.681 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:19.681 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:19.939 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc5df086-e6f7-4517-8b11-f2970100caa6 00:08:20.197 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:20.455 [2024-07-15 22:19:33.876213] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.455 22:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.455 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66041 00:08:20.455 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.455 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66041 /var/tmp/bdevperf.sock 00:08:20.455 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 66041 ']' 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:20.456 22:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:20.714 [2024-07-15 22:19:34.136348] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
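The lvs_grow_clean setup above is built on a file-backed AIO bdev: a 200M file gives a store of 49 data clusters at the 4 MiB cluster size, a 150M lvol is carved out of it and exported over NVMe/TCP, and the backing file is then truncated to 400M and rescanned so that bdev_lvol_grow_lvstore (issued later in the run, while bdevperf is writing) can claim the new space and the cluster count can be re-checked against 99. A condensed sketch of that sequence with the same paths and sizes as the trace (for orientation only; the real test interleaves the grow with I/O):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    truncate -s 400M "$aio_file"        # grow the backing file
    $rpc bdev_aio_rescan aio_bdev       # bdev picks up 51200 -> 102400 blocks
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99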
00:08:20.714 [2024-07-15 22:19:34.136438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66041 ] 00:08:20.714 [2024-07-15 22:19:34.280513] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.972 [2024-07-15 22:19:34.382222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.973 [2024-07-15 22:19:34.424552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.540 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.540 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:21.540 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:21.799 Nvme0n1 00:08:21.799 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:22.058 [ 00:08:22.058 { 00:08:22.058 "name": "Nvme0n1", 00:08:22.058 "aliases": [ 00:08:22.058 "fc5df086-e6f7-4517-8b11-f2970100caa6" 00:08:22.058 ], 00:08:22.058 "product_name": "NVMe disk", 00:08:22.058 "block_size": 4096, 00:08:22.058 "num_blocks": 38912, 00:08:22.058 "uuid": "fc5df086-e6f7-4517-8b11-f2970100caa6", 00:08:22.058 "assigned_rate_limits": { 00:08:22.058 "rw_ios_per_sec": 0, 00:08:22.058 "rw_mbytes_per_sec": 0, 00:08:22.058 "r_mbytes_per_sec": 0, 00:08:22.058 "w_mbytes_per_sec": 0 00:08:22.058 }, 00:08:22.058 "claimed": false, 00:08:22.058 "zoned": false, 00:08:22.058 "supported_io_types": { 00:08:22.058 "read": true, 00:08:22.058 "write": true, 00:08:22.058 "unmap": true, 00:08:22.058 "flush": true, 00:08:22.058 "reset": true, 00:08:22.058 "nvme_admin": true, 00:08:22.058 "nvme_io": true, 00:08:22.058 "nvme_io_md": false, 00:08:22.058 "write_zeroes": true, 00:08:22.058 "zcopy": false, 00:08:22.058 "get_zone_info": false, 00:08:22.058 "zone_management": false, 00:08:22.058 "zone_append": false, 00:08:22.058 "compare": true, 00:08:22.058 "compare_and_write": true, 00:08:22.058 "abort": true, 00:08:22.058 "seek_hole": false, 00:08:22.058 "seek_data": false, 00:08:22.058 "copy": true, 00:08:22.058 "nvme_iov_md": false 00:08:22.058 }, 00:08:22.058 "memory_domains": [ 00:08:22.058 { 00:08:22.058 "dma_device_id": "system", 00:08:22.058 "dma_device_type": 1 00:08:22.058 } 00:08:22.058 ], 00:08:22.058 "driver_specific": { 00:08:22.058 "nvme": [ 00:08:22.058 { 00:08:22.058 "trid": { 00:08:22.058 "trtype": "TCP", 00:08:22.058 "adrfam": "IPv4", 00:08:22.058 "traddr": "10.0.0.2", 00:08:22.058 "trsvcid": "4420", 00:08:22.058 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:22.058 }, 00:08:22.058 "ctrlr_data": { 00:08:22.058 "cntlid": 1, 00:08:22.058 "vendor_id": "0x8086", 00:08:22.058 "model_number": "SPDK bdev Controller", 00:08:22.058 "serial_number": "SPDK0", 00:08:22.058 "firmware_revision": "24.09", 00:08:22.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.058 "oacs": { 00:08:22.058 "security": 0, 00:08:22.058 "format": 0, 00:08:22.058 "firmware": 0, 00:08:22.058 "ns_manage": 0 00:08:22.058 }, 00:08:22.058 "multi_ctrlr": true, 00:08:22.058 
"ana_reporting": false 00:08:22.058 }, 00:08:22.058 "vs": { 00:08:22.058 "nvme_version": "1.3" 00:08:22.058 }, 00:08:22.058 "ns_data": { 00:08:22.058 "id": 1, 00:08:22.058 "can_share": true 00:08:22.058 } 00:08:22.058 } 00:08:22.058 ], 00:08:22.058 "mp_policy": "active_passive" 00:08:22.058 } 00:08:22.058 } 00:08:22.058 ] 00:08:22.058 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66059 00:08:22.058 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.058 22:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:22.058 Running I/O for 10 seconds... 00:08:22.993 Latency(us) 00:08:22.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.993 Nvme0n1 : 1.00 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:08:22.993 =================================================================================================================== 00:08:22.993 Total : 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:08:22.993 00:08:23.944 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:24.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.202 Nvme0n1 : 2.00 9967.50 38.94 0.00 0.00 0.00 0.00 0.00 00:08:24.202 =================================================================================================================== 00:08:24.202 Total : 9967.50 38.94 0.00 0.00 0.00 0.00 0.00 00:08:24.202 00:08:24.202 true 00:08:24.202 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:24.202 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:24.459 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:24.459 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:24.459 22:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66059 00:08:25.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.024 Nvme0n1 : 3.00 9904.67 38.69 0.00 0.00 0.00 0.00 0.00 00:08:25.025 =================================================================================================================== 00:08:25.025 Total : 9904.67 38.69 0.00 0.00 0.00 0.00 0.00 00:08:25.025 00:08:26.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.395 Nvme0n1 : 4.00 9746.25 38.07 0.00 0.00 0.00 0.00 0.00 00:08:26.395 =================================================================================================================== 00:08:26.395 Total : 9746.25 38.07 0.00 0.00 0.00 0.00 0.00 00:08:26.395 00:08:26.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.960 Nvme0n1 : 5.00 9702.00 37.90 0.00 0.00 0.00 0.00 0.00 00:08:26.960 =================================================================================================================== 00:08:26.960 Total : 9702.00 37.90 0.00 0.00 0.00 
0.00 0.00 00:08:26.960 00:08:28.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.332 Nvme0n1 : 6.00 9672.50 37.78 0.00 0.00 0.00 0.00 0.00 00:08:28.332 =================================================================================================================== 00:08:28.332 Total : 9672.50 37.78 0.00 0.00 0.00 0.00 0.00 00:08:28.332 00:08:29.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.269 Nvme0n1 : 7.00 9667.86 37.77 0.00 0.00 0.00 0.00 0.00 00:08:29.269 =================================================================================================================== 00:08:29.269 Total : 9667.86 37.77 0.00 0.00 0.00 0.00 0.00 00:08:29.269 00:08:30.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.204 Nvme0n1 : 8.00 9650.00 37.70 0.00 0.00 0.00 0.00 0.00 00:08:30.204 =================================================================================================================== 00:08:30.204 Total : 9650.00 37.70 0.00 0.00 0.00 0.00 0.00 00:08:30.204 00:08:31.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.140 Nvme0n1 : 9.00 9614.33 37.56 0.00 0.00 0.00 0.00 0.00 00:08:31.140 =================================================================================================================== 00:08:31.140 Total : 9614.33 37.56 0.00 0.00 0.00 0.00 0.00 00:08:31.140 00:08:32.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.078 Nvme0n1 : 10.00 9601.60 37.51 0.00 0.00 0.00 0.00 0.00 00:08:32.078 =================================================================================================================== 00:08:32.078 Total : 9601.60 37.51 0.00 0.00 0.00 0.00 0.00 00:08:32.078 00:08:32.078 00:08:32.078 Latency(us) 00:08:32.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.078 Nvme0n1 : 10.00 9609.97 37.54 0.00 0.00 13315.93 7474.79 41058.70 00:08:32.078 =================================================================================================================== 00:08:32.078 Total : 9609.97 37.54 0.00 0.00 13315.93 7474.79 41058.70 00:08:32.078 0 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66041 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 66041 ']' 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 66041 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66041 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66041' 00:08:32.078 killing process with pid 66041 00:08:32.078 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 66041 
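The 10-second bdevperf run above is driven entirely over a second RPC socket: bdevperf is started with -z (wait for configuration), a controller is attached to the target's listener as Nvme0, and the workload is released through bdevperf.py; once the grow has been verified against 99 clusters and the run completes, the bdevperf process is killed as shown. A hedged sketch of that control flow, using the same socket path and NQN as the trace (illustrative, not the test script):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # attach the exported lvol as Nvme0n1 over NVMe/TCP
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # release the queued workload and wait for the run to finish
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"
    kill "$bdevperf_pid"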
00:08:32.078 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.078 00:08:32.079 Latency(us) 00:08:32.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.079 =================================================================================================================== 00:08:32.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.079 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 66041 00:08:32.337 22:19:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.596 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.855 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:32.855 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:33.114 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:33.114 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:33.114 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:33.114 [2024-07-15 22:19:46.735610] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:33.372 request: 00:08:33.372 { 00:08:33.372 "uuid": "61465832-4867-49af-889d-11d6e9d19e3f", 00:08:33.372 "method": "bdev_lvol_get_lvstores", 00:08:33.372 "req_id": 1 00:08:33.372 } 00:08:33.372 Got JSON-RPC error response 00:08:33.372 response: 00:08:33.372 { 00:08:33.372 "code": -19, 00:08:33.372 "message": "No such device" 00:08:33.372 } 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.372 22:19:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.631 aio_bdev 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fc5df086-e6f7-4517-8b11-f2970100caa6 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fc5df086-e6f7-4517-8b11-f2970100caa6 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:33.631 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.890 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc5df086-e6f7-4517-8b11-f2970100caa6 -t 2000 00:08:34.149 [ 00:08:34.149 { 00:08:34.149 "name": "fc5df086-e6f7-4517-8b11-f2970100caa6", 00:08:34.149 "aliases": [ 00:08:34.149 "lvs/lvol" 00:08:34.149 ], 00:08:34.149 "product_name": "Logical Volume", 00:08:34.149 "block_size": 4096, 00:08:34.149 "num_blocks": 38912, 00:08:34.149 "uuid": "fc5df086-e6f7-4517-8b11-f2970100caa6", 00:08:34.149 "assigned_rate_limits": { 00:08:34.149 "rw_ios_per_sec": 0, 00:08:34.149 "rw_mbytes_per_sec": 0, 00:08:34.149 "r_mbytes_per_sec": 0, 00:08:34.149 "w_mbytes_per_sec": 0 00:08:34.149 }, 00:08:34.149 "claimed": false, 00:08:34.149 "zoned": false, 00:08:34.149 "supported_io_types": { 00:08:34.149 "read": true, 00:08:34.149 "write": true, 00:08:34.149 "unmap": true, 00:08:34.149 "flush": false, 00:08:34.149 "reset": true, 00:08:34.149 "nvme_admin": false, 00:08:34.149 "nvme_io": false, 00:08:34.149 "nvme_io_md": false, 00:08:34.149 "write_zeroes": true, 00:08:34.149 "zcopy": false, 00:08:34.149 "get_zone_info": false, 00:08:34.149 "zone_management": false, 00:08:34.149 "zone_append": false, 00:08:34.149 "compare": false, 00:08:34.149 "compare_and_write": false, 00:08:34.149 "abort": false, 00:08:34.149 "seek_hole": true, 00:08:34.149 "seek_data": true, 00:08:34.149 "copy": false, 00:08:34.149 "nvme_iov_md": false 00:08:34.149 }, 00:08:34.149 "driver_specific": { 00:08:34.149 "lvol": { 
00:08:34.149 "lvol_store_uuid": "61465832-4867-49af-889d-11d6e9d19e3f", 00:08:34.149 "base_bdev": "aio_bdev", 00:08:34.149 "thin_provision": false, 00:08:34.149 "num_allocated_clusters": 38, 00:08:34.149 "snapshot": false, 00:08:34.149 "clone": false, 00:08:34.149 "esnap_clone": false 00:08:34.149 } 00:08:34.149 } 00:08:34.149 } 00:08:34.149 ] 00:08:34.149 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:34.149 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:34.149 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:34.407 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:34.407 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:34.407 22:19:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:34.407 22:19:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:34.407 22:19:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fc5df086-e6f7-4517-8b11-f2970100caa6 00:08:34.665 22:19:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61465832-4867-49af-889d-11d6e9d19e3f 00:08:34.923 22:19:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:35.205 22:19:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.464 ************************************ 00:08:35.464 END TEST lvs_grow_clean 00:08:35.464 ************************************ 00:08:35.464 00:08:35.464 real 0m17.099s 00:08:35.464 user 0m15.186s 00:08:35.464 sys 0m3.088s 00:08:35.464 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.464 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.722 ************************************ 00:08:35.722 START TEST lvs_grow_dirty 00:08:35.722 ************************************ 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:35.722 22:19:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.722 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.979 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:35.979 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:35.979 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:35.980 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:35.980 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:36.237 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:36.237 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:36.237 22:19:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa lvol 150 00:08:36.514 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:36.514 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:36.514 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:36.772 [2024-07-15 22:19:50.187458] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:36.772 [2024-07-15 22:19:50.187561] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:36.772 true 00:08:36.772 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:36.772 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:36.772 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:36.772 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.031 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:37.289 22:19:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.548 [2024-07-15 22:19:51.018652] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.548 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66299 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66299 /var/tmp/bdevperf.sock 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66299 ']' 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.806 22:19:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.806 [2024-07-15 22:19:51.289057] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
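(Sketch, not captured output.) The interleaved xtrace above is the whole dirty-grow setup; distilled into a plain script it looks like the following. It reuses the exact RPCs, file path and sizes from the trace; rpc.py on PATH and a target already listening on 10.0.0.2:4420 are assumed, and $lvs / $lvol simply capture whatever the create calls print (the UUIDs seen above).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio_file"                             # 200 MiB backing file
$rpc bdev_aio_create "$aio_file" aio_bdev 4096           # register it as bdev "aio_bdev", 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 4 MiB clusters -> 49 data clusters once metadata is accounted for
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB logical volume "lvs/lvol"

truncate -s 400M "$aio_file"                             # grow the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                            # block count 51200 -> 102400; the lvstore is not grown yet

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420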
00:08:37.806 [2024-07-15 22:19:51.289357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66299 ] 00:08:37.806 [2024-07-15 22:19:51.430916] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.065 [2024-07-15 22:19:51.578577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.065 [2024-07-15 22:19:51.651631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.670 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.670 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:38.670 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:38.927 Nvme0n1 00:08:38.927 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:39.186 [ 00:08:39.186 { 00:08:39.186 "name": "Nvme0n1", 00:08:39.186 "aliases": [ 00:08:39.186 "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0" 00:08:39.186 ], 00:08:39.186 "product_name": "NVMe disk", 00:08:39.186 "block_size": 4096, 00:08:39.186 "num_blocks": 38912, 00:08:39.186 "uuid": "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0", 00:08:39.186 "assigned_rate_limits": { 00:08:39.186 "rw_ios_per_sec": 0, 00:08:39.186 "rw_mbytes_per_sec": 0, 00:08:39.186 "r_mbytes_per_sec": 0, 00:08:39.186 "w_mbytes_per_sec": 0 00:08:39.186 }, 00:08:39.186 "claimed": false, 00:08:39.186 "zoned": false, 00:08:39.186 "supported_io_types": { 00:08:39.186 "read": true, 00:08:39.186 "write": true, 00:08:39.187 "unmap": true, 00:08:39.187 "flush": true, 00:08:39.187 "reset": true, 00:08:39.187 "nvme_admin": true, 00:08:39.187 "nvme_io": true, 00:08:39.187 "nvme_io_md": false, 00:08:39.187 "write_zeroes": true, 00:08:39.187 "zcopy": false, 00:08:39.187 "get_zone_info": false, 00:08:39.187 "zone_management": false, 00:08:39.187 "zone_append": false, 00:08:39.187 "compare": true, 00:08:39.187 "compare_and_write": true, 00:08:39.187 "abort": true, 00:08:39.187 "seek_hole": false, 00:08:39.187 "seek_data": false, 00:08:39.187 "copy": true, 00:08:39.187 "nvme_iov_md": false 00:08:39.187 }, 00:08:39.187 "memory_domains": [ 00:08:39.187 { 00:08:39.187 "dma_device_id": "system", 00:08:39.187 "dma_device_type": 1 00:08:39.187 } 00:08:39.187 ], 00:08:39.187 "driver_specific": { 00:08:39.187 "nvme": [ 00:08:39.187 { 00:08:39.187 "trid": { 00:08:39.187 "trtype": "TCP", 00:08:39.187 "adrfam": "IPv4", 00:08:39.187 "traddr": "10.0.0.2", 00:08:39.187 "trsvcid": "4420", 00:08:39.187 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:39.187 }, 00:08:39.187 "ctrlr_data": { 00:08:39.187 "cntlid": 1, 00:08:39.187 "vendor_id": "0x8086", 00:08:39.187 "model_number": "SPDK bdev Controller", 00:08:39.187 "serial_number": "SPDK0", 00:08:39.187 "firmware_revision": "24.09", 00:08:39.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.187 "oacs": { 00:08:39.187 "security": 0, 00:08:39.187 "format": 0, 00:08:39.187 "firmware": 0, 00:08:39.187 "ns_manage": 0 00:08:39.187 }, 00:08:39.187 "multi_ctrlr": true, 00:08:39.187 
"ana_reporting": false 00:08:39.187 }, 00:08:39.187 "vs": { 00:08:39.187 "nvme_version": "1.3" 00:08:39.187 }, 00:08:39.187 "ns_data": { 00:08:39.187 "id": 1, 00:08:39.187 "can_share": true 00:08:39.187 } 00:08:39.187 } 00:08:39.187 ], 00:08:39.187 "mp_policy": "active_passive" 00:08:39.187 } 00:08:39.187 } 00:08:39.187 ] 00:08:39.187 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66322 00:08:39.187 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.187 22:19:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:39.187 Running I/O for 10 seconds... 00:08:40.121 Latency(us) 00:08:40.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.121 Nvme0n1 : 1.00 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:08:40.121 =================================================================================================================== 00:08:40.121 Total : 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:08:40.121 00:08:41.056 22:19:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:41.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.315 Nvme0n1 : 2.00 10159.00 39.68 0.00 0.00 0.00 0.00 0.00 00:08:41.315 =================================================================================================================== 00:08:41.315 Total : 10159.00 39.68 0.00 0.00 0.00 0.00 0.00 00:08:41.315 00:08:41.315 true 00:08:41.315 22:19:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:41.315 22:19:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:41.573 22:19:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:41.573 22:19:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:41.573 22:19:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66322 00:08:42.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.140 Nvme0n1 : 3.00 10201.67 39.85 0.00 0.00 0.00 0.00 0.00 00:08:42.140 =================================================================================================================== 00:08:42.140 Total : 10201.67 39.85 0.00 0.00 0.00 0.00 0.00 00:08:42.140 00:08:43.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.516 Nvme0n1 : 4.00 10126.75 39.56 0.00 0.00 0.00 0.00 0.00 00:08:43.516 =================================================================================================================== 00:08:43.516 Total : 10126.75 39.56 0.00 0.00 0.00 0.00 0.00 00:08:43.516 00:08:44.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.448 Nvme0n1 : 5.00 10043.20 39.23 0.00 0.00 0.00 0.00 0.00 00:08:44.448 =================================================================================================================== 00:08:44.449 Total : 10043.20 39.23 0.00 
0.00 0.00 0.00 0.00 00:08:44.449 00:08:45.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.421 Nvme0n1 : 6.00 9977.83 38.98 0.00 0.00 0.00 0.00 0.00 00:08:45.421 =================================================================================================================== 00:08:45.421 Total : 9977.83 38.98 0.00 0.00 0.00 0.00 0.00 00:08:45.421 00:08:46.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.352 Nvme0n1 : 7.00 9967.57 38.94 0.00 0.00 0.00 0.00 0.00 00:08:46.352 =================================================================================================================== 00:08:46.352 Total : 9967.57 38.94 0.00 0.00 0.00 0.00 0.00 00:08:46.352 00:08:47.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.288 Nvme0n1 : 8.00 9573.12 37.40 0.00 0.00 0.00 0.00 0.00 00:08:47.288 =================================================================================================================== 00:08:47.288 Total : 9573.12 37.40 0.00 0.00 0.00 0.00 0.00 00:08:47.288 00:08:48.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.221 Nvme0n1 : 9.00 9511.00 37.15 0.00 0.00 0.00 0.00 0.00 00:08:48.221 =================================================================================================================== 00:08:48.221 Total : 9511.00 37.15 0.00 0.00 0.00 0.00 0.00 00:08:48.221 00:08:49.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.156 Nvme0n1 : 10.00 9468.20 36.99 0.00 0.00 0.00 0.00 0.00 00:08:49.156 =================================================================================================================== 00:08:49.156 Total : 9468.20 36.99 0.00 0.00 0.00 0.00 0.00 00:08:49.156 00:08:49.156 00:08:49.156 Latency(us) 00:08:49.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.156 Nvme0n1 : 10.00 9477.11 37.02 0.00 0.00 13501.99 7106.31 279620.27 00:08:49.156 =================================================================================================================== 00:08:49.156 Total : 9477.11 37.02 0.00 0.00 13501.99 7106.31 279620.27 00:08:49.156 0 00:08:49.156 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66299 00:08:49.156 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66299 ']' 00:08:49.157 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66299 00:08:49.157 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:49.157 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.157 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66299 00:08:49.415 killing process with pid 66299 00:08:49.415 Received shutdown signal, test time was about 10.000000 seconds 00:08:49.415 00:08:49.415 Latency(us) 00:08:49.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.415 =================================================================================================================== 00:08:49.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.415 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- 
# process_name=reactor_1 00:08:49.415 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:49.415 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66299' 00:08:49.415 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66299 00:08:49.415 22:20:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66299 00:08:49.674 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.674 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:49.940 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:49.940 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65959 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65959 00:08:50.210 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65959 Killed "${NVMF_APP[@]}" "$@" 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66455 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66455 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66455 ']' 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
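(Sketch, not captured output.) What makes this variant "dirty" is visible above: after bdevperf finishes, the original target (pid 65959 here) is taken down with SIGKILL, so the lvstore is never cleanly unloaded, and a fresh target is started against the same aio file. A condensed version of that hand-off, with an illustrative spdk_get_version poll standing in for the harness's waitforlisten helper:

kill -9 "$old_nvmfpid"                                   # no clean shutdown; lvstore left dirty on disk
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
new_nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        spdk_get_version >/dev/null 2>&1; do             # wait until the new RPC socket answers
    sleep 0.5
done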
00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.210 22:20:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.210 [2024-07-15 22:20:03.823462] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:50.210 [2024-07-15 22:20:03.823541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.468 [2024-07-15 22:20:03.972928] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.726 [2024-07-15 22:20:04.124837] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.726 [2024-07-15 22:20:04.124922] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.726 [2024-07-15 22:20:04.124940] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.726 [2024-07-15 22:20:04.124954] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.726 [2024-07-15 22:20:04.124965] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.726 [2024-07-15 22:20:04.125008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.726 [2024-07-15 22:20:04.201475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.292 22:20:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.551 [2024-07-15 22:20:04.958152] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:51.551 [2024-07-15 22:20:04.958820] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:51.551 [2024-07-15 22:20:04.959160] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
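(Sketch, not captured output.) The "Performing recovery on blobstore" / "Recover: blob" notices above are the consequence of the dirty shutdown: when the aio bdev is re-created, the lvstore is replayed from its metadata rather than loaded from a clean superblock, and the lvol then reappears through bdev examine. The wait that follows in the trace reduces to two RPCs (the bdev name is the lvol UUID created before the SIGKILL; 2000 is the per-call timeout in ms):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0
$rpc bdev_wait_for_examine               # block until all examine callbacks have completed
$rpc bdev_get_bdevs -b "$bdev" -t 2000   # fails, and the test with it, if the lvol never comes back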
00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:51.551 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.809 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 -t 2000 00:08:51.809 [ 00:08:51.809 { 00:08:51.809 "name": "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0", 00:08:51.809 "aliases": [ 00:08:51.809 "lvs/lvol" 00:08:51.809 ], 00:08:51.809 "product_name": "Logical Volume", 00:08:51.809 "block_size": 4096, 00:08:51.809 "num_blocks": 38912, 00:08:51.809 "uuid": "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0", 00:08:51.809 "assigned_rate_limits": { 00:08:51.809 "rw_ios_per_sec": 0, 00:08:51.809 "rw_mbytes_per_sec": 0, 00:08:51.809 "r_mbytes_per_sec": 0, 00:08:51.809 "w_mbytes_per_sec": 0 00:08:51.809 }, 00:08:51.809 "claimed": false, 00:08:51.809 "zoned": false, 00:08:51.809 "supported_io_types": { 00:08:51.809 "read": true, 00:08:51.809 "write": true, 00:08:51.809 "unmap": true, 00:08:51.809 "flush": false, 00:08:51.809 "reset": true, 00:08:51.809 "nvme_admin": false, 00:08:51.809 "nvme_io": false, 00:08:51.809 "nvme_io_md": false, 00:08:51.809 "write_zeroes": true, 00:08:51.809 "zcopy": false, 00:08:51.809 "get_zone_info": false, 00:08:51.809 "zone_management": false, 00:08:51.809 "zone_append": false, 00:08:51.809 "compare": false, 00:08:51.809 "compare_and_write": false, 00:08:51.809 "abort": false, 00:08:51.809 "seek_hole": true, 00:08:51.809 "seek_data": true, 00:08:51.809 "copy": false, 00:08:51.809 "nvme_iov_md": false 00:08:51.809 }, 00:08:51.809 "driver_specific": { 00:08:51.809 "lvol": { 00:08:51.809 "lvol_store_uuid": "4d1ad4fc-4620-4468-9567-ccb32cf709fa", 00:08:51.809 "base_bdev": "aio_bdev", 00:08:51.809 "thin_provision": false, 00:08:51.809 "num_allocated_clusters": 38, 00:08:51.809 "snapshot": false, 00:08:51.809 "clone": false, 00:08:51.809 "esnap_clone": false 00:08:51.809 } 00:08:51.809 } 00:08:51.810 } 00:08:51.810 ] 00:08:51.810 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:51.810 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:51.810 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:52.068 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:52.069 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:52.069 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:52.326 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:52.326 22:20:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:52.585 [2024-07-15 22:20:06.005453] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:52.585 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:52.843 request: 00:08:52.843 { 00:08:52.843 "uuid": "4d1ad4fc-4620-4468-9567-ccb32cf709fa", 00:08:52.843 "method": "bdev_lvol_get_lvstores", 00:08:52.843 "req_id": 1 00:08:52.843 } 00:08:52.843 Got JSON-RPC error response 00:08:52.843 response: 00:08:52.843 { 00:08:52.843 "code": -19, 00:08:52.843 "message": "No such device" 00:08:52.843 } 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.843 aio_bdev 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:52.843 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:53.101 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 -t 2000 00:08:53.360 [ 00:08:53.360 { 00:08:53.360 "name": "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0", 00:08:53.360 "aliases": [ 00:08:53.360 "lvs/lvol" 00:08:53.360 ], 00:08:53.360 "product_name": "Logical Volume", 00:08:53.360 "block_size": 4096, 00:08:53.360 "num_blocks": 38912, 00:08:53.360 "uuid": "5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0", 00:08:53.360 "assigned_rate_limits": { 00:08:53.360 "rw_ios_per_sec": 0, 00:08:53.360 "rw_mbytes_per_sec": 0, 00:08:53.360 "r_mbytes_per_sec": 0, 00:08:53.360 "w_mbytes_per_sec": 0 00:08:53.360 }, 00:08:53.360 "claimed": false, 00:08:53.360 "zoned": false, 00:08:53.360 "supported_io_types": { 00:08:53.360 "read": true, 00:08:53.360 "write": true, 00:08:53.360 "unmap": true, 00:08:53.360 "flush": false, 00:08:53.360 "reset": true, 00:08:53.360 "nvme_admin": false, 00:08:53.360 "nvme_io": false, 00:08:53.360 "nvme_io_md": false, 00:08:53.360 "write_zeroes": true, 00:08:53.360 "zcopy": false, 00:08:53.360 "get_zone_info": false, 00:08:53.360 "zone_management": false, 00:08:53.360 "zone_append": false, 00:08:53.360 "compare": false, 00:08:53.360 "compare_and_write": false, 00:08:53.360 "abort": false, 00:08:53.360 "seek_hole": true, 00:08:53.360 "seek_data": true, 00:08:53.360 "copy": false, 00:08:53.360 "nvme_iov_md": false 00:08:53.360 }, 00:08:53.360 "driver_specific": { 00:08:53.360 "lvol": { 00:08:53.360 "lvol_store_uuid": "4d1ad4fc-4620-4468-9567-ccb32cf709fa", 00:08:53.360 "base_bdev": "aio_bdev", 00:08:53.360 "thin_provision": false, 00:08:53.360 "num_allocated_clusters": 38, 00:08:53.360 "snapshot": false, 00:08:53.360 "clone": false, 00:08:53.360 "esnap_clone": false 00:08:53.360 } 00:08:53.360 } 00:08:53.360 } 00:08:53.360 ] 00:08:53.360 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:53.360 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:53.360 22:20:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:53.617 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:53.617 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:53.617 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:53.617 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:53.617 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5febac39-58f0-4ab5-ab8b-67e0d9bcf8c0 00:08:53.874 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 4d1ad4fc-4620-4468-9567-ccb32cf709fa 00:08:54.153 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:54.410 22:20:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:54.978 ************************************ 00:08:54.978 END TEST lvs_grow_dirty 00:08:54.978 ************************************ 00:08:54.978 00:08:54.978 real 0m19.149s 00:08:54.978 user 0m38.787s 00:08:54.978 sys 0m7.753s 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:54.978 nvmf_trace.0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.978 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.978 rmmod nvme_tcp 00:08:54.978 rmmod nvme_fabrics 00:08:54.978 rmmod nvme_keyring 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66455 ']' 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66455 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66455 ']' 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66455 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66455 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66455' 00:08:55.286 killing process with pid 66455 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66455 00:08:55.286 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66455 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.544 22:20:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.544 22:20:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:55.544 ************************************ 00:08:55.544 END TEST nvmf_lvs_grow 00:08:55.544 ************************************ 00:08:55.544 00:08:55.544 real 0m39.065s 00:08:55.544 user 0m59.727s 00:08:55.544 sys 0m11.799s 00:08:55.544 22:20:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.544 22:20:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.544 22:20:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:55.544 22:20:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:55.544 22:20:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:55.544 22:20:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.544 22:20:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.544 ************************************ 00:08:55.544 START TEST nvmf_bdev_io_wait 00:08:55.544 ************************************ 00:08:55.545 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:55.803 * Looking for test storage... 
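(Sketch, not captured output.) The teardown that just ran, condensed: archive the tracepoint shared-memory file, unload the kernel initiator modules, stop the target, and flush the initiator interface before the next test re-runs the same init. Paths and the pid are the ones from the trace above.

tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
modprobe -v -r nvme-tcp                  # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                          # 66455 above; the harness waits for it to exit before continuing
ip -4 addr flush nvmf_init_if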
00:08:55.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.803 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:55.804 Cannot find device "nvmf_tgt_br" 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.804 Cannot find device "nvmf_tgt_br2" 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:55.804 Cannot find device "nvmf_tgt_br" 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:55.804 Cannot find device "nvmf_tgt_br2" 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:55.804 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
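(Sketch, not captured output.) The "Cannot find device" and "Cannot open network namespace" lines above are only the harness clearing leftovers from a previous run; the topology it rebuilds next, with the same names and addresses as in the trace that follows, puts the initiator in the root namespace on 10.0.0.1 and the target inside nvmf_tgt_ns_spdk on 10.0.0.2 and 10.0.0.3, with the veth peer ends tied together by the nvmf_br bridge:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # reachability checks, as in the trace below
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1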
00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:56.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:56.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:56.064 00:08:56.064 --- 10.0.0.2 ping statistics --- 00:08:56.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.064 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:56.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:56.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:56.064 00:08:56.064 --- 10.0.0.3 ping statistics --- 00:08:56.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.064 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:56.064 00:08:56.064 --- 10.0.0.1 ping statistics --- 00:08:56.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.064 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.064 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66766 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66766 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66766 ']' 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
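(Sketch, not captured output.) The target for bdev_io_wait is started with --wait-for-rpc because the test has to shrink the bdev_io pool before the framework initializes; with only a handful of bdev_io objects, submissions run out of them and exercise the io-wait path the test is named after. The RPCs issued next in the trace amount to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1          # bdev_io pool size 5, per-thread cache 1 (tiny on purpose)
$rpc framework_start_init                # only now finish subsystem initialization
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420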
00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.324 22:20:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.324 [2024-07-15 22:20:09.762590] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:56.324 [2024-07-15 22:20:09.762689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.324 [2024-07-15 22:20:09.909098] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.582 [2024-07-15 22:20:10.010401] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.582 [2024-07-15 22:20:10.010661] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.582 [2024-07-15 22:20:10.010783] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.582 [2024-07-15 22:20:10.010830] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.582 [2024-07-15 22:20:10.010859] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.582 [2024-07-15 22:20:10.011024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.582 [2024-07-15 22:20:10.011212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.582 [2024-07-15 22:20:10.012071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.582 [2024-07-15 22:20:10.012073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.150 [2024-07-15 22:20:10.753646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.150 
22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.150 [2024-07-15 22:20:10.768897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.150 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 Malloc0 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 [2024-07-15 22:20:10.840105] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66801 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66803 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:57.408 { 00:08:57.408 "params": { 00:08:57.408 "name": "Nvme$subsystem", 00:08:57.408 "trtype": "$TEST_TRANSPORT", 00:08:57.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.408 "adrfam": "ipv4", 00:08:57.408 
"trsvcid": "$NVMF_PORT", 00:08:57.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.408 "hdgst": ${hdgst:-false}, 00:08:57.408 "ddgst": ${ddgst:-false} 00:08:57.408 }, 00:08:57.408 "method": "bdev_nvme_attach_controller" 00:08:57.408 } 00:08:57.408 EOF 00:08:57.408 )") 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:57.408 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:57.408 { 00:08:57.408 "params": { 00:08:57.408 "name": "Nvme$subsystem", 00:08:57.408 "trtype": "$TEST_TRANSPORT", 00:08:57.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.408 "adrfam": "ipv4", 00:08:57.408 "trsvcid": "$NVMF_PORT", 00:08:57.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.408 "hdgst": ${hdgst:-false}, 00:08:57.409 "ddgst": ${ddgst:-false} 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 } 00:08:57.409 EOF 00:08:57.409 )") 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66805 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66809 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:57.409 { 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme$subsystem", 00:08:57.409 "trtype": "$TEST_TRANSPORT", 00:08:57.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "$NVMF_PORT", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.409 "hdgst": ${hdgst:-false}, 00:08:57.409 "ddgst": ${ddgst:-false} 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 } 00:08:57.409 EOF 00:08:57.409 )") 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme1", 00:08:57.409 "trtype": "tcp", 00:08:57.409 "traddr": "10.0.0.2", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "4420", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.409 "hdgst": false, 00:08:57.409 "ddgst": false 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 }' 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
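The gen_nvmf_target_json helper traced above assembles one bdev_nvme_attach_controller entry per bdevperf instance from the test environment and pipes it through jq; the /dev/fd/63 argument on each bdevperf command line is a bash process substitution feeding it that generated --json config. With the values resolved for this run (taken from the printf output above), the controller entry looks like the snippet below; how the helper wraps it into the full config is not visible in this trace:

    jq . <<'EOF'
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF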
00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:57.409 { 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme$subsystem", 00:08:57.409 "trtype": "$TEST_TRANSPORT", 00:08:57.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "$NVMF_PORT", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.409 "hdgst": ${hdgst:-false}, 00:08:57.409 "ddgst": ${ddgst:-false} 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 } 00:08:57.409 EOF 00:08:57.409 )") 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme1", 00:08:57.409 "trtype": "tcp", 00:08:57.409 "traddr": "10.0.0.2", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "4420", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.409 "hdgst": false, 00:08:57.409 "ddgst": false 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 }' 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme1", 00:08:57.409 "trtype": "tcp", 00:08:57.409 "traddr": "10.0.0.2", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "4420", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.409 "hdgst": false, 00:08:57.409 "ddgst": false 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 }' 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:57.409 "params": { 00:08:57.409 "name": "Nvme1", 00:08:57.409 "trtype": "tcp", 00:08:57.409 "traddr": "10.0.0.2", 00:08:57.409 "adrfam": "ipv4", 00:08:57.409 "trsvcid": "4420", 00:08:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.409 "hdgst": false, 00:08:57.409 "ddgst": false 00:08:57.409 }, 00:08:57.409 "method": "bdev_nvme_attach_controller" 00:08:57.409 }' 00:08:57.409 [2024-07-15 22:20:10.902658] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:57.409 [2024-07-15 22:20:10.902930] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:57.409 [2024-07-15 22:20:10.907023] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:08:57.409 [2024-07-15 22:20:10.907227] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:57.409 22:20:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66801 00:08:57.409 [2024-07-15 22:20:10.913799] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:57.409 [2024-07-15 22:20:10.914039] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:57.409 [2024-07-15 22:20:10.918814] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:57.409 [2024-07-15 22:20:10.919042] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:57.667 [2024-07-15 22:20:11.109899] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.667 [2024-07-15 22:20:11.194985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:57.667 [2024-07-15 22:20:11.205262] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.667 [2024-07-15 22:20:11.254192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.667 [2024-07-15 22:20:11.282104] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.925 [2024-07-15 22:20:11.306912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:57.925 [2024-07-15 22:20:11.344856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.925 [2024-07-15 22:20:11.347994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.925 Running I/O for 1 seconds... 00:08:57.925 [2024-07-15 22:20:11.365159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:57.925 [2024-07-15 22:20:11.402905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.925 [2024-07-15 22:20:11.430038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:57.925 Running I/O for 1 seconds... 00:08:57.925 [2024-07-15 22:20:11.467765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.925 Running I/O for 1 seconds... 00:08:58.184 Running I/O for 1 seconds... 
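Condensed, the bdev_io_wait flow above is four bdevperf instances driving the same subsystem concurrently, one workload and one dedicated core each, with the parent script waiting on them afterwards. A minimal sketch of that pattern, using the same binary, flags, and gen_nvmf_target_json helper seen in the trace (process substitution written out explicitly; the per-pid waits of the script are collapsed into one wait here):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    sync
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"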
00:08:58.751 00:08:58.751 Latency(us) 00:08:58.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.751 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:58.751 Nvme1n1 : 1.01 9895.69 38.66 0.00 0.00 12878.97 7158.95 18318.50 00:08:58.751 =================================================================================================================== 00:08:58.751 Total : 9895.69 38.66 0.00 0.00 12878.97 7158.95 18318.50 00:08:59.010 00:08:59.010 Latency(us) 00:08:59.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.010 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:59.010 Nvme1n1 : 1.00 209835.59 819.67 0.00 0.00 607.91 282.94 2395.09 00:08:59.010 =================================================================================================================== 00:08:59.010 Total : 209835.59 819.67 0.00 0.00 607.91 282.94 2395.09 00:08:59.010 00:08:59.010 Latency(us) 00:08:59.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.010 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:59.010 Nvme1n1 : 1.01 6177.74 24.13 0.00 0.00 20570.69 8738.13 32425.84 00:08:59.010 =================================================================================================================== 00:08:59.010 Total : 6177.74 24.13 0.00 0.00 20570.69 8738.13 32425.84 00:08:59.010 00:08:59.010 Latency(us) 00:08:59.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.010 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:59.010 Nvme1n1 : 1.01 5086.14 19.87 0.00 0.00 25036.12 10001.48 39795.35 00:08:59.010 =================================================================================================================== 00:08:59.010 Total : 5086.14 19.87 0.00 0.00 25036.12 10001.48 39795.35 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66803 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66805 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66809 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.268 rmmod nvme_tcp 00:08:59.268 rmmod nvme_fabrics 00:08:59.268 rmmod nvme_keyring 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66766 ']' 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66766 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66766 ']' 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66766 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.268 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66766 00:08:59.528 killing process with pid 66766 00:08:59.528 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.528 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.528 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66766' 00:08:59.528 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66766 00:08:59.528 22:20:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66766 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:59.790 ************************************ 00:08:59.790 END TEST nvmf_bdev_io_wait 00:08:59.790 ************************************ 00:08:59.790 00:08:59.790 real 0m4.150s 00:08:59.790 user 0m17.308s 00:08:59.790 sys 0m2.344s 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.790 22:20:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.790 22:20:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:59.790 22:20:13 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:59.790 22:20:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.790 22:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.790 22:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.790 ************************************ 00:08:59.790 START TEST nvmf_queue_depth 00:08:59.790 ************************************ 00:08:59.790 22:20:13 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.049 * Looking for test storage... 00:09:00.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:00.049 Cannot find device "nvmf_tgt_br" 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.049 Cannot find device "nvmf_tgt_br2" 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:00.049 Cannot find device "nvmf_tgt_br" 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:00.049 Cannot find device "nvmf_tgt_br2" 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:00.049 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:00.049 22:20:13 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.306 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.307 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:09:00.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:00.307 00:09:00.307 --- 10.0.0.2 ping statistics --- 00:09:00.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.307 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:00.307 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:00.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:09:00.307 00:09:00.307 --- 10.0.0.3 ping statistics --- 00:09:00.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.307 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:00.307 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:00.567 00:09:00.567 --- 10.0.0.1 ping statistics --- 00:09:00.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.567 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=67041 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 67041 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67041 ']' 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
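The queue_depth target is then launched inside that namespace, and the script blocks until its RPC socket answers. A simplified stand-in for the nvmfappstart/waitforlisten pair above (the polling loop is an approximation of waitforlisten, not a copy of it):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done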
00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.567 22:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 [2024-07-15 22:20:14.024405] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:00.567 [2024-07-15 22:20:14.024482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.567 [2024-07-15 22:20:14.185004] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.828 [2024-07-15 22:20:14.285430] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.828 [2024-07-15 22:20:14.285482] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.828 [2024-07-15 22:20:14.285493] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.828 [2024-07-15 22:20:14.285501] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.828 [2024-07-15 22:20:14.285508] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.828 [2024-07-15 22:20:14.285550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.828 [2024-07-15 22:20:14.327941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 [2024-07-15 22:20:14.940513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 Malloc0 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.394 22:20:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 [2024-07-15 22:20:15.007592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67073 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67073 /var/tmp/bdevperf.sock 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67073 ']' 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.394 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.653 [2024-07-15 22:20:15.064533] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
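Expressed directly with scripts/rpc.py (rpc_cmd in these scripts is a thin wrapper around it), the queue_depth setup traced above comes down to configuring the target, starting bdevperf in -z mode on its own RPC socket, attaching the remote namespace as an NVMe bdev, and kicking off the run. A condensed sketch with the arguments from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport, Malloc0 backing bdev, subsystem, namespace, listener on 10.0.0.2:4420
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf idles (-z) on /var/tmp/bdevperf.sock until a controller is attached
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # queue-depth-1024 verify workload for 10 seconds
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests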
00:09:01.654 [2024-07-15 22:20:15.064834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67073 ] 00:09:01.654 [2024-07-15 22:20:15.208313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.912 [2024-07-15 22:20:15.311432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.912 [2024-07-15 22:20:15.354392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.480 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.480 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:02.480 22:20:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:02.480 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.480 22:20:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.480 NVMe0n1 00:09:02.480 22:20:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.480 22:20:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.480 Running I/O for 10 seconds... 00:09:12.536 00:09:12.536 Latency(us) 00:09:12.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.536 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:12.536 Verification LBA range: start 0x0 length 0x4000 00:09:12.536 NVMe0n1 : 10.07 10894.62 42.56 0.00 0.00 93609.21 20318.79 74116.22 00:09:12.536 =================================================================================================================== 00:09:12.536 Total : 10894.62 42.56 0.00 0.00 93609.21 20318.79 74116.22 00:09:12.536 0 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67073 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67073 ']' 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67073 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.794 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67073 00:09:12.794 killing process with pid 67073 00:09:12.794 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.794 00:09:12.794 Latency(us) 00:09:12.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.794 =================================================================================================================== 00:09:12.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.795 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.795 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.795 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 67073' 00:09:12.795 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67073 00:09:12.795 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67073 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.054 rmmod nvme_tcp 00:09:13.054 rmmod nvme_fabrics 00:09:13.054 rmmod nvme_keyring 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.054 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 67041 ']' 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 67041 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67041 ']' 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67041 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67041 00:09:13.055 killing process with pid 67041 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67041' 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67041 00:09:13.055 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67041 00:09:13.313 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:13.314 
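Teardown (nvmftestfini) then reverses everything: unload the host-side NVMe/TCP modules, kill the target process, and dismantle the namespace plumbing. Roughly, with the namespace removal being an assumption about what the suppressed _remove_spdk_ns call does:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk   # assumed; _remove_spdk_ns output is redirected away in the log
    ip -4 addr flush nvmf_init_if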
************************************ 00:09:13.314 END TEST nvmf_queue_depth 00:09:13.314 ************************************ 00:09:13.314 00:09:13.314 real 0m13.590s 00:09:13.314 user 0m22.675s 00:09:13.314 sys 0m2.778s 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.314 22:20:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.572 22:20:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.572 22:20:26 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.572 22:20:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.572 22:20:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.572 22:20:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.572 ************************************ 00:09:13.572 START TEST nvmf_target_multipath 00:09:13.572 ************************************ 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.572 * Looking for test storage... 00:09:13.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.572 22:20:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.573 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.573 22:20:27 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:13.832 Cannot find device "nvmf_tgt_br" 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.832 Cannot find device "nvmf_tgt_br2" 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:13.832 Cannot find device "nvmf_tgt_br" 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:13.832 Cannot find device "nvmf_tgt_br2" 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:13.832 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
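The nvmf_veth_init sequence traced here builds the entire test network on a single host: one initiator-side veth in the root namespace, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends. Gathered into one place for readability (every command below appears in the trace; only the grouping and comments are added), the wiring is roughly:

# target namespace plus three veth pairs; the *_if ends carry traffic, the *_br ends get bridged
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# one initiator address, two target addresses: this is what gives the test two paths
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring every interface up, inside and outside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the root-namespace ends together so 10.0.0.1 can reach both 10.0.0.2 and 10.0.0.3
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# accept NVMe/TCP (port 4420) on the initiator interface and let bridged traffic pass FORWARD
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply confirm this topology before the target application is started.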
00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:09:14.091 00:09:14.091 --- 10.0.0.2 ping statistics --- 00:09:14.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.091 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:09:14.091 00:09:14.091 --- 10.0.0.3 ping statistics --- 00:09:14.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.091 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:14.091 00:09:14.091 --- 10.0.0.1 ping statistics --- 00:09:14.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.091 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67394 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67394 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67394 ']' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.091 22:20:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:14.091 [2024-07-15 22:20:27.693925] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
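With nvmf_tgt now running inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), the configuration traced over the next entries reduces to a short RPC plus nvme-cli sequence: one TCP transport, a 64 MiB malloc-backed namespace, and a single subsystem exposed through two listeners that the host then connects to over both addresses. A condensed sketch, with commands exactly as they appear in the trace and only the full rpc.py path shortened to rpc.py:

# target side: transport, backing bdev, subsystem (with ANA reporting, -r), namespace, two listeners
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# host side: connect to the same subsystem over both addresses (flags copied from the trace)
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc \
    --hostid=37374fe9-a847-4b40-94af-b766955abedc \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc \
    --hostid=37374fe9-a847-4b40-94af-b766955abedc \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

After both connects, native NVMe multipathing presents one block device backed by two controller paths, nvme0c0n1 and nvme0c1n1, which is what the (( 2 == 2 )) path-count check further down asserts before fio starts.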
00:09:14.091 [2024-07-15 22:20:27.694007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.350 [2024-07-15 22:20:27.840158] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.350 [2024-07-15 22:20:27.978654] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.350 [2024-07-15 22:20:27.978946] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.350 [2024-07-15 22:20:27.979092] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.350 [2024-07-15 22:20:27.979143] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.350 [2024-07-15 22:20:27.979171] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.350 [2024-07-15 22:20:27.979329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.350 [2024-07-15 22:20:27.980158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.350 [2024-07-15 22:20:27.980302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.350 [2024-07-15 22:20:27.980309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.607 [2024-07-15 22:20:28.057627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.216 22:20:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.216 [2024-07-15 22:20:28.811545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.473 22:20:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:15.473 Malloc0 00:09:15.731 22:20:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:15.731 22:20:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.990 22:20:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.247 [2024-07-15 22:20:29.747820] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.247 22:20:29 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:16.505 [2024-07-15 22:20:29.939826] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:16.505 22:20:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:16.505 22:20:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:16.763 22:20:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.763 22:20:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.763 22:20:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.763 22:20:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.763 22:20:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:18.661 22:20:32 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67484 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:18.661 22:20:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:18.919 [global] 00:09:18.919 thread=1 00:09:18.919 invalidate=1 00:09:18.919 rw=randrw 00:09:18.919 time_based=1 00:09:18.919 runtime=6 00:09:18.919 ioengine=libaio 00:09:18.919 direct=1 00:09:18.919 bs=4096 00:09:18.919 iodepth=128 00:09:18.919 norandommap=0 00:09:18.919 numjobs=1 00:09:18.919 00:09:18.919 verify_dump=1 00:09:18.919 verify_backlog=512 00:09:18.919 verify_state_save=0 00:09:18.919 do_verify=1 00:09:18.919 verify=crc32c-intel 00:09:18.919 [job0] 00:09:18.919 filename=/dev/nvme0n1 00:09:18.919 Could not set queue depth (nvme0n1) 00:09:18.919 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.919 fio-3.35 00:09:18.919 Starting 1 thread 00:09:19.867 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:19.867 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:20.128 
22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:20.128 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:20.385 22:20:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:20.643 22:20:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67484 00:09:25.911 00:09:25.911 job0: (groupid=0, jobs=1): err= 0: pid=67505: Mon Jul 15 22:20:38 2024 00:09:25.911 read: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(264MiB/6002msec) 00:09:25.911 slat (usec): min=5, max=8713, avg=47.47, stdev=181.39 00:09:25.911 clat (usec): min=1580, max=25185, avg=7700.13, stdev=1661.11 00:09:25.911 lat (usec): min=1599, max=25200, avg=7747.60, stdev=1667.99 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 6783], 00:09:25.911 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7701], 00:09:25.911 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 9503], 95.00th=[11207], 00:09:25.911 | 99.00th=[12780], 99.50th=[14222], 99.90th=[20055], 99.95th=[24511], 00:09:25.911 | 99.99th=[25035] 00:09:25.911 bw ( KiB/s): min=11704, max=28384, per=53.34%, avg=23979.73, stdev=4517.54, samples=11 00:09:25.911 iops : min= 2926, max= 7096, avg=5994.91, stdev=1129.39, samples=11 00:09:25.911 write: IOPS=6576, BW=25.7MiB/s (26.9MB/s)(141MiB/5486msec); 0 zone resets 00:09:25.911 slat (usec): min=12, max=8242, avg=63.00, stdev=121.36 00:09:25.911 clat (usec): min=1086, max=24659, avg=6688.13, stdev=1579.92 00:09:25.911 lat (usec): min=1114, max=24696, avg=6751.13, stdev=1586.67 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 3851], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5669], 00:09:25.911 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6915], 00:09:25.911 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7898], 95.00th=[ 8717], 00:09:25.911 | 99.00th=[12387], 99.50th=[14746], 99.90th=[23462], 99.95th=[23987], 00:09:25.911 | 99.99th=[24511] 00:09:25.911 bw ( KiB/s): min=12288, max=27736, per=91.22%, avg=23994.00, stdev=4264.28, samples=11 00:09:25.911 iops : min= 3072, max= 6934, avg=5998.45, stdev=1066.06, samples=11 00:09:25.911 lat (msec) : 2=0.05%, 4=0.77%, 10=92.59%, 20=6.48%, 50=0.11% 00:09:25.911 cpu : usr=7.52%, sys=30.46%, ctx=6341, majf=0, minf=96 00:09:25.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:25.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.911 issued rwts: total=67461,36077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.911 00:09:25.911 Run status group 0 (all jobs): 00:09:25.911 READ: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=264MiB (276MB), run=6002-6002msec 00:09:25.911 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=141MiB (148MB), run=5486-5486msec 00:09:25.911 00:09:25.911 Disk stats (read/write): 00:09:25.911 nvme0n1: ios=66329/35556, merge=0/0, ticks=472400/213113, in_queue=685513, util=98.61% 00:09:25.911 22:20:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:25.911 22:20:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n optimized 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67583 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:25.911 22:20:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:25.911 [global] 00:09:25.911 thread=1 00:09:25.911 invalidate=1 00:09:25.911 rw=randrw 00:09:25.911 time_based=1 00:09:25.911 runtime=6 00:09:25.911 ioengine=libaio 00:09:25.911 direct=1 00:09:25.911 bs=4096 00:09:25.911 iodepth=128 00:09:25.911 norandommap=0 00:09:25.911 numjobs=1 00:09:25.911 00:09:25.911 verify_dump=1 00:09:25.911 verify_backlog=512 00:09:25.911 verify_state_save=0 00:09:25.911 do_verify=1 00:09:25.911 verify=crc32c-intel 00:09:25.911 [job0] 00:09:25.911 filename=/dev/nvme0n1 00:09:25.911 Could not set queue depth (nvme0n1) 00:09:25.911 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.911 fio-3.35 00:09:25.911 Starting 1 thread 00:09:26.846 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:26.846 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:27.104 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:27.104 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:27.104 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.105 22:20:40 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:27.105 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:27.363 22:20:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67583 00:09:32.625 00:09:32.625 job0: (groupid=0, jobs=1): err= 0: pid=67609: Mon Jul 15 22:20:45 2024 00:09:32.625 read: IOPS=10.9k, BW=42.7MiB/s (44.7MB/s)(256MiB/6005msec) 00:09:32.625 slat (usec): min=4, max=6111, avg=43.31, stdev=168.53 00:09:32.625 clat (usec): min=291, max=27170, avg=7937.58, stdev=3195.38 00:09:32.625 lat (usec): min=313, max=27187, avg=7980.89, stdev=3197.18 00:09:32.625 clat percentiles (usec): 00:09:32.625 | 1.00th=[ 979], 5.00th=[ 1909], 10.00th=[ 4948], 20.00th=[ 6652], 00:09:32.625 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7832], 00:09:32.625 | 70.00th=[ 8160], 80.00th=[ 8717], 90.00th=[11600], 95.00th=[15401], 00:09:32.625 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20579], 99.95th=[21103], 00:09:32.625 | 99.99th=[25822] 00:09:32.625 bw ( KiB/s): min= 7696, max=28792, per=52.96%, avg=23140.64, stdev=6987.57, samples=11 00:09:32.625 iops : min= 1924, max= 7198, avg=5785.09, stdev=1746.87, samples=11 00:09:32.625 write: IOPS=6540, BW=25.5MiB/s (26.8MB/s)(138MiB/5407msec); 0 zone resets 00:09:32.625 slat (usec): min=5, max=1771, avg=58.64, stdev=102.44 00:09:32.625 clat (usec): min=302, max=27175, avg=6812.25, stdev=2894.78 00:09:32.625 lat (usec): min=344, max=27222, avg=6870.89, stdev=2894.91 00:09:32.625 clat percentiles (usec): 00:09:32.625 | 1.00th=[ 848], 5.00th=[ 1483], 10.00th=[ 3720], 20.00th=[ 5276], 00:09:32.625 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 6783], 60.00th=[ 7046], 00:09:32.625 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 9503], 95.00th=[13698], 00:09:32.625 | 99.00th=[15795], 99.50th=[16450], 99.90th=[19530], 99.95th=[23725], 00:09:32.625 | 99.99th=[26870] 00:09:32.625 bw ( KiB/s): min= 8152, max=29768, per=88.50%, avg=23154.73, stdev=6904.36, samples=11 00:09:32.625 iops : min= 2038, max= 7442, avg=5788.55, stdev=1726.06, samples=11 00:09:32.625 lat (usec) : 500=0.07%, 750=0.42%, 1000=0.84% 00:09:32.625 lat (msec) : 2=4.41%, 4=3.20%, 10=78.20%, 20=12.67%, 50=0.18% 00:09:32.625 cpu : usr=6.96%, sys=30.12%, ctx=7277, majf=0, minf=96 00:09:32.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.625 issued rwts: total=65598,35365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.625 00:09:32.625 Run status group 0 (all jobs): 00:09:32.625 READ: bw=42.7MiB/s (44.7MB/s), 42.7MiB/s-42.7MiB/s (44.7MB/s-44.7MB/s), io=256MiB (269MB), run=6005-6005msec 00:09:32.625 WRITE: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=138MiB (145MB), run=5407-5407msec 00:09:32.625 00:09:32.625 Disk stats (read/write): 00:09:32.625 nvme0n1: ios=64828/34374, merge=0/0, ticks=479732/213365, in_queue=693097, util=98.75% 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath 
-- common/autotest_common.sh@1219 -- # local i=0 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.625 rmmod nvme_tcp 00:09:32.625 rmmod nvme_fabrics 00:09:32.625 rmmod nvme_keyring 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67394 ']' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67394 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67394 ']' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67394 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67394 00:09:32.625 killing process with pid 67394 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67394' 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67394 00:09:32.625 22:20:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67394 00:09:32.884 
22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:32.884 00:09:32.884 real 0m19.348s 00:09:32.884 user 1m11.841s 00:09:32.884 sys 0m10.016s 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.884 ************************************ 00:09:32.884 END TEST nvmf_target_multipath 00:09:32.884 22:20:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:32.884 ************************************ 00:09:32.884 22:20:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:32.884 22:20:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:32.884 22:20:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:32.884 22:20:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.884 22:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.884 ************************************ 00:09:32.884 START TEST nvmf_zcopy 00:09:32.884 ************************************ 00:09:32.884 22:20:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.142 * Looking for test storage... 
00:09:33.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.142 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:33.143 Cannot find device "nvmf_tgt_br" 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.143 Cannot find device "nvmf_tgt_br2" 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:33.143 Cannot find device "nvmf_tgt_br" 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:33.143 Cannot find device "nvmf_tgt_br2" 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:33.143 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:33.401 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:33.402 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.402 22:20:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.402 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:33.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:09:33.660 00:09:33.660 --- 10.0.0.2 ping statistics --- 00:09:33.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.660 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:33.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:09:33.660 00:09:33.660 --- 10.0.0.3 ping statistics --- 00:09:33.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.660 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:33.660 00:09:33.660 --- 10.0.0.1 ping statistics --- 00:09:33.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.660 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67860 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67860 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67860 ']' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.660 22:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 [2024-07-15 22:20:47.203211] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:33.660 [2024-07-15 22:20:47.203294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.918 [2024-07-15 22:20:47.346773] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.918 [2024-07-15 22:20:47.444759] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.918 [2024-07-15 22:20:47.444805] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.918 [2024-07-15 22:20:47.444816] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.918 [2024-07-15 22:20:47.444824] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.918 [2024-07-15 22:20:47.444831] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.918 [2024-07-15 22:20:47.444861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.918 [2024-07-15 22:20:47.486150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.485 [2024-07-15 22:20:48.078429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.485 [2024-07-15 22:20:48.102507] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.485 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
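Editor's note: at this point the run has finished assembling its veth/bridge/namespace topology (nvmf_veth_init) and has provisioned the target over RPC: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks (the namespace attach is traced immediately below). The block that follows is a hedged, standalone approximation of those steps using the names and addresses from this run; it assumes a root shell with iproute2/iptables, SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, and it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) that the run also creates. It is a sketch, not a copy of nvmf/common.sh.

    # Build the test topology: an initiator-side veth pair and a target-side veth
    # pair whose far end lives in its own network namespace, bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Start the target inside the namespace (binary path as built in this run),
    # wait for /var/tmp/spdk.sock to appear, then provision it the same way the
    # script's rpc_cmd calls do.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1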
00:09:34.744 malloc0 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.744 { 00:09:34.744 "params": { 00:09:34.744 "name": "Nvme$subsystem", 00:09:34.744 "trtype": "$TEST_TRANSPORT", 00:09:34.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.744 "adrfam": "ipv4", 00:09:34.744 "trsvcid": "$NVMF_PORT", 00:09:34.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.744 "hdgst": ${hdgst:-false}, 00:09:34.744 "ddgst": ${ddgst:-false} 00:09:34.744 }, 00:09:34.744 "method": "bdev_nvme_attach_controller" 00:09:34.744 } 00:09:34.744 EOF 00:09:34.744 )") 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:34.744 22:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.744 "params": { 00:09:34.744 "name": "Nvme1", 00:09:34.744 "trtype": "tcp", 00:09:34.744 "traddr": "10.0.0.2", 00:09:34.744 "adrfam": "ipv4", 00:09:34.744 "trsvcid": "4420", 00:09:34.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.744 "hdgst": false, 00:09:34.744 "ddgst": false 00:09:34.744 }, 00:09:34.744 "method": "bdev_nvme_attach_controller" 00:09:34.744 }' 00:09:34.744 [2024-07-15 22:20:48.201138] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:34.744 [2024-07-15 22:20:48.201206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67889 ] 00:09:34.744 [2024-07-15 22:20:48.345518] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.002 [2024-07-15 22:20:48.444849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.002 [2024-07-15 22:20:48.494404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.002 Running I/O for 10 seconds... 
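Editor's note: the first bdevperf pass above (10 seconds, queue depth 128, 8 KiB verify I/O) gets its bdev configuration over a pipe: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420 and bdevperf reads it from /dev/fd/62. A hedged standalone equivalent is sketched below, writing the config to a file instead of a file descriptor; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout and is assumed here, since the trace only shows the inner entry being generated.

    # /tmp/zcopy_bdevperf.json: minimal bdev subsystem config (wrapper layout assumed).
    cat > /tmp/zcopy_bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload parameters as the run above, reading the file directly.
    ./build/examples/bdevperf --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192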
00:09:47.186 00:09:47.186 Latency(us) 00:09:47.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.186 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:47.186 Verification LBA range: start 0x0 length 0x1000 00:09:47.186 Nvme1n1 : 10.01 7659.61 59.84 0.00 0.00 16666.02 2368.77 26740.79 00:09:47.186 =================================================================================================================== 00:09:47.186 Total : 7659.61 59.84 0.00 0.00 16666.02 2368.77 26740.79 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68005 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:47.186 { 00:09:47.186 "params": { 00:09:47.186 "name": "Nvme$subsystem", 00:09:47.186 "trtype": "$TEST_TRANSPORT", 00:09:47.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.186 "adrfam": "ipv4", 00:09:47.186 "trsvcid": "$NVMF_PORT", 00:09:47.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.186 "hdgst": ${hdgst:-false}, 00:09:47.186 "ddgst": ${ddgst:-false} 00:09:47.186 }, 00:09:47.186 "method": "bdev_nvme_attach_controller" 00:09:47.186 } 00:09:47.186 EOF 00:09:47.186 )") 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:47.186 [2024-07-15 22:20:58.806462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.806508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:47.186 22:20:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:47.186 "params": { 00:09:47.186 "name": "Nvme1", 00:09:47.186 "trtype": "tcp", 00:09:47.186 "traddr": "10.0.0.2", 00:09:47.186 "adrfam": "ipv4", 00:09:47.186 "trsvcid": "4420", 00:09:47.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.186 "hdgst": false, 00:09:47.186 "ddgst": false 00:09:47.186 }, 00:09:47.186 "method": "bdev_nvme_attach_controller" 00:09:47.186 }' 00:09:47.186 [2024-07-15 22:20:58.822433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.822464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.842390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.842428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.854335] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:47.186 [2024-07-15 22:20:58.854367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.854395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.854426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68005 ] 00:09:47.186 [2024-07-15 22:20:58.866357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.866388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.882338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.882385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.898307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.898336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.914290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.914321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.930271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.930307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.946242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.946274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.962223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.962252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.978192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
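Editor's note: the paired messages that start here and repeat for the rest of the excerpt come from the target, not from bdevperf. spdk_nvmf_subsystem_add_ns_ext (subsystem.c:2058) rejects a request for NSID 1 because the namespace created earlier from malloc0 still occupies it, and the RPC handler (nvmf_rpc_ns_paused, nvmf_rpc.c:1553) then logs that it could not add the namespace. They are the expected outcome of re-adding an in-use namespace while the second bdevperf job (PID 68005, 5 seconds of 50/50 random read/write at 8 KiB, queue depth 128) is running. One such rejection can be reproduced by hand against this target roughly as follows; the rpc.py path and the idea of eyeballing the subsystem listing are assumptions, the add_ns invocation itself is the one traced earlier in this log.

    # Confirm that cnode1 already carries NSID 1 backed by malloc0 ...
    ./scripts/rpc.py nvmf_get_subsystems
    # ... then try to add it again: rpc.py exits non-zero and the target logs
    # "Requested NSID 1 already in use" followed by "Unable to add namespace".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1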
Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.978221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:58.994185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:58.994217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.001516] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.186 [2024-07-15 22:20:59.010152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.010185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.026140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.026174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.042105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.042137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.054090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.054116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.066074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.066102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.186 [2024-07-15 22:20:59.078060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.186 [2024-07-15 22:20:59.078094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.090046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.090079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.101642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.187 [2024-07-15 22:20:59.102021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.102039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.114019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.114058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.125994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.126024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.137972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.138002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.149952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.149979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.151844] 
sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.187 [2024-07-15 22:20:59.161950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.161984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.173924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.173951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.185906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.185933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.197932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.197974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.209917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.209957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.221906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.221958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.233903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.233945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.245879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.245914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.257883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.257926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 Running I/O for 5 seconds... 
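Editor's note: with the 5-second random read/write job now issuing I/O, the run keeps firing namespace-add RPCs at cnode1 every few milliseconds, and each one is turned away with the same pair of errors. The loop below is only an illustrative approximation that would produce the same pattern; the actual logic in target/zcopy.sh is not shown in this excerpt and may differ (for instance by removing and re-adding the namespace). $perfpid stands for the bdevperf PID, 68005 in this run.

    # Keep poking the occupied NSID for as long as the I/O job is alive; '|| true'
    # keeps the loop running even though each rpc.py call fails as expected.
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done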
00:09:47.187 [2024-07-15 22:20:59.269875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.269914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.285900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.285948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.304350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.304398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.322832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.322886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.339680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.339732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.351054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.351101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.366594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.366652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.382715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.382766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.397974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.398023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.417440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.417490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.435888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.435942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.451036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.451083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.470798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.470847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.488023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.488073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.505510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 
[2024-07-15 22:20:59.505563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.520858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.520905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.539962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.540013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.558047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.558096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.575944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.575991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.591395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.591446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.606217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.606265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.621437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.621485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.637263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.637323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.652183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.652233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.667796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.667841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.682883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.682926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.698482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.698526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.713383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.713428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.729152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.729197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.744151] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.744194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.759943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.759993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.775385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.775434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.790708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.790753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.805241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.805303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.816403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.816448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.831839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.831887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.847709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.187 [2024-07-15 22:20:59.847755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.187 [2024-07-15 22:20:59.865657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.865704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.880855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.880899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.899642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.899686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.917736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.917785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.932723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.932773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.952436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.952488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.970521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.970572] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:20:59.985616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:20:59.985661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.005645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.005716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.023605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.023651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.041845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.041891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.059624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.059691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.077431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.077483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.092438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.092484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.111825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.111875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.129726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.129772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.144593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.144647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.160562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.160619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.176767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.176817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.192560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.192617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.210046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.210091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.224826] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.224871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.235932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.235972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.251199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.251240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.270840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.270884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.288619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.288668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.306782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.306830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.322672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.322720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.342066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.342109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.356477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.356529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.371912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.371962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.391218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.391268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.409825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.409872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.426021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.426068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.443300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.443345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.461835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.461881] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.479824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.479873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.498460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.498507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.518354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.518405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.536790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.536837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.551147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.551194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.566114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.566159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.582692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.582744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.598898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.598945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.615216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.615268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.633208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.633257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.651449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.651495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.669346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.669391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.684840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.684885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.700237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.700282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.718466] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.718513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.736381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.736429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.751298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.751341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.770499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.770552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.188 [2024-07-15 22:21:00.788657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.188 [2024-07-15 22:21:00.788722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.189 [2024-07-15 22:21:00.807177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.189 [2024-07-15 22:21:00.807225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.825463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.825506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.841170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.841216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.860566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.860623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.876020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.876066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.895338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.895384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.910224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.910263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.926452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.926495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.946506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.946551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.447 [2024-07-15 22:21:00.964430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.447 [2024-07-15 22:21:00.964475] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:00.982407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:00.982449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:00.997534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:00.997579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:01.016747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:01.016790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:01.034468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:01.034514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:01.049430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:01.049472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.448 [2024-07-15 22:21:01.068840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.448 [2024-07-15 22:21:01.068885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.086397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.086444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.101437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.101483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.117120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.117165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.134760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.134806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.152685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.152735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.171219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.171264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.186250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.186294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.205245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.205303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.220487] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.705 [2024-07-15 22:21:01.220532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.705 [2024-07-15 22:21:01.239563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.239621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.254308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.254355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.265170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.265213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.280646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.280692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.299472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.299519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.316623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.316670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.706 [2024-07-15 22:21:01.334944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.706 [2024-07-15 22:21:01.334995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.353167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.353212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.371181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.371229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.386476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.386524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.405893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.405939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.421851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.421897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.440552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.440611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.456340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.456383] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.475681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.963 [2024-07-15 22:21:01.475734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.963 [2024-07-15 22:21:01.493772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.493820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.509238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.509295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.525333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.525387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.540501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.540552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.560872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.560921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.576708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.576753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.964 [2024-07-15 22:21:01.594946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.964 [2024-07-15 22:21:01.594993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.612719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.612768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.631149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.631198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.646052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.646102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.662497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.662546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.681771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.681815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.697081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.697130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.712690] 
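Editor's note: the rejections above are not a fault, they simply mean NSID 1 is taken. If the goal were to swap the backing bdev rather than exercise this failure path, the namespace would be removed first and then re-added, along the lines of the hedged sketch below (same NQN and bdev as in this run, plain rpc.py calls against the default /var/tmp/spdk.sock).

    # Free NSID 1 on cnode1, then attach malloc0 to it again.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1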
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.712733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.726179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.726226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.741572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.741636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.756914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.756960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.775079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.775128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.793109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.793155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.810802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.810846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.828472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.828516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.222 [2024-07-15 22:21:01.843891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.222 [2024-07-15 22:21:01.843932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.863173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.863216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.878046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.878088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.893765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.893810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.911759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.911803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.927391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.927433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.942502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.942545] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.960671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.960716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.974883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.974925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:01.993010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:01.993056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.010877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.010925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.029355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.029410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.045264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.045338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.059901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.059946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.079600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.079662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-07-15 22:21:02.097847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-07-15 22:21:02.097893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.115879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.115927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.133926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.133973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.149355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.149401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.168327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.168377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.183381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.183428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.202828] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.202876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.218258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.218306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.237676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.237729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.255834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.255882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.271247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.271293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.290697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.290747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.306246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.306293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.321496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.321542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.339649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.339693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.739 [2024-07-15 22:21:02.357650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.739 [2024-07-15 22:21:02.357699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.375402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.375451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.394247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.394301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.412954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.413006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.429627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.429680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.449231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.449286] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.465869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.465917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.485225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.485286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.500779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.500825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.515886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.515930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.530401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.530445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.542030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.542075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.557455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.557498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.573328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.573368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.589062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.589115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.607032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.607102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.997 [2024-07-15 22:21:02.625526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.997 [2024-07-15 22:21:02.625576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.643909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.643965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.659452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.659504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.677152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.677203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.692779] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.692828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.711227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.711281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.726768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.726819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.745903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.745952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.763724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.763789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.778911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.778956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.794070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.794111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.808418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.808462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.824636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.824685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.842839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.842885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.860735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.860778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.255 [2024-07-15 22:21:02.878678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.255 [2024-07-15 22:21:02.878721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.894533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.894579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.912510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.912555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.930930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.930975] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.946591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.946652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.966286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.966328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:02.981969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:02.982010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.001098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.001142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.019730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.019777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.038018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.038065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.056317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.056364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.074653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.074697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.092442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.092488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.111664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.111710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.129684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.129731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.513 [2024-07-15 22:21:03.145235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.513 [2024-07-15 22:21:03.145293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.164823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.164875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.183637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.183687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.203326] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.203376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.221611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.221674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.240227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.240278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.258817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.258869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.276788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.276838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.295044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.295093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.313071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.313118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.328785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.328830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.347368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.347415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.366124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.366172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.384915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.384958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.770 [2024-07-15 22:21:03.402838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.770 [2024-07-15 22:21:03.402890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.420632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.420683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.435664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.435712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.455546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.455613] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.474072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.474122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.492546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.492606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.511269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.511319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.528259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.528310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.547511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.547557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.026 [2024-07-15 22:21:03.565828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.026 [2024-07-15 22:21:03.565876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.584093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.584140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.598854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.598902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.615298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.615343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.626400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.626459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.642337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.642398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.027 [2024-07-15 22:21:03.657867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.027 [2024-07-15 22:21:03.657915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.283 [2024-07-15 22:21:03.669098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.283 [2024-07-15 22:21:03.669141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.283 [2024-07-15 22:21:03.684431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.283 [2024-07-15 22:21:03.684480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.283 [2024-07-15 22:21:03.700590] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.283 [2024-07-15 22:21:03.700652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.283 [2024-07-15 22:21:03.716652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.716706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.734628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.734678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.749925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.749971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.765754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.765798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.780230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.780272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.796816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.796859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.813291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.813360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.829315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.829359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.847319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.847365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.865047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.865090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.882421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.882479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.284 [2024-07-15 22:21:03.901072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.284 [2024-07-15 22:21:03.901119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:03.917413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:03.917463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:03.933603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:03.933658] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:03.951558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:03.951617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:03.969856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:03.969903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:03.984631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:03.984675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.000910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.000954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.017463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.017527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.037088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.037138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.055607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.055658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.073611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.073663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.091451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.091502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.106035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.106086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.117459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.117505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.136009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.136057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.154125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.154175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.542 [2024-07-15 22:21:04.169750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.542 [2024-07-15 22:21:04.169798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.188862] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.188910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.206725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.206768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.224659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.224711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.240090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.240135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.256256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.256302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 00:09:50.801 Latency(us) 00:09:50.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.801 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:50.801 Nvme1n1 : 5.01 15545.20 121.45 0.00 0.00 8226.84 3447.88 18739.61 00:09:50.801 =================================================================================================================== 00:09:50.801 Total : 15545.20 121.45 0.00 0.00 8226.84 3447.88 18739.61 00:09:50.801 [2024-07-15 22:21:04.264376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.264411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.276350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.276385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.292325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.292355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.308302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.308333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.324273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.324305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.340252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.340286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.356229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.356269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.372208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.372241] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.388186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.388219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.404157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.404186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.801 [2024-07-15 22:21:04.420138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.801 [2024-07-15 22:21:04.420167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.058 [2024-07-15 22:21:04.436107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.058 [2024-07-15 22:21:04.436135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.058 [2024-07-15 22:21:04.452082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.058 [2024-07-15 22:21:04.452111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.058 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68005) - No such process 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68005 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 delay0 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.058 22:21:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.058 [2024-07-15 22:21:04.673479] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.633 Initializing NVMe Controllers 00:09:57.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.633 Initialization complete. Launching workers. 
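[editor's note] The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs above come from zcopy.sh re-issuing nvmf_subsystem_add_ns against a namespace ID that is still attached; the run only proceeds once NSID 1 is freed, wrapped in a delay bdev, and re-attached before the abort example is launched. A minimal hand-run sketch of that same sequence follows, issued through scripts/rpc.py (the test's rpc_cmd is a thin wrapper around it) and assuming a target that is already serving nqn.2016-06.io.spdk:cnode1 with a malloc0-backed namespace at NSID 1; it is a reference sketch, not an extra step of the test.

  # Sketch only -- assumes a running nvmf_tgt with cnode1 and NSID 1 in use.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Free the busy NSID; adding any bdev under it before this point fails with
  # "Requested NSID 1 already in use", which is what the loop above exercised.
  $rpc nvmf_subsystem_remove_ns "$nqn" 1

  # Wrap the base bdev in a delay bdev (the same four latency values as the log)
  # and attach it under the now-free NSID 1.
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1

  # Drive it with the abort example over TCP, matching the invocation captured above.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'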
00:09:57.633 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 56 00:09:57.633 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 343, failed to submit 33 00:09:57.633 success 199, unsuccess 144, failed 0 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.633 rmmod nvme_tcp 00:09:57.633 rmmod nvme_fabrics 00:09:57.633 rmmod nvme_keyring 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67860 ']' 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67860 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67860 ']' 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67860 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67860 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:57.633 killing process with pid 67860 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67860' 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67860 00:09:57.633 22:21:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67860 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.633 22:21:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:57.633 00:09:57.633 real 0m24.824s 00:09:57.633 user 0m39.091s 00:09:57.633 sys 0m8.667s 00:09:57.892 22:21:11 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.892 22:21:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 ************************************ 00:09:57.892 END TEST nvmf_zcopy 00:09:57.892 ************************************ 00:09:57.892 22:21:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.892 22:21:11 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.892 22:21:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.892 22:21:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.892 22:21:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.892 ************************************ 00:09:57.892 START TEST nvmf_nmic 00:09:57.892 ************************************ 00:09:57.892 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.892 * Looking for test storage... 00:09:57.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.892 22:21:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.892 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.893 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:58.152 Cannot find device "nvmf_tgt_br" 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.152 Cannot find device "nvmf_tgt_br2" 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:58.152 Cannot find device "nvmf_tgt_br" 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:58.152 Cannot find device "nvmf_tgt_br2" 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.152 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.411 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:58.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:09:58.412 00:09:58.412 --- 10.0.0.2 ping statistics --- 00:09:58.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.412 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:58.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:58.412 00:09:58.412 --- 10.0.0.3 ping statistics --- 00:09:58.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.412 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:09:58.412 00:09:58.412 --- 10.0.0.1 ping statistics --- 00:09:58.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.412 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68330 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68330 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68330 ']' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.412 22:21:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 [2024-07-15 22:21:11.991127] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:58.412 [2024-07-15 22:21:11.991196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.670 [2024-07-15 22:21:12.136373] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.671 [2024-07-15 22:21:12.237242] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.671 [2024-07-15 22:21:12.237302] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.671 [2024-07-15 22:21:12.237311] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.671 [2024-07-15 22:21:12.237319] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.671 [2024-07-15 22:21:12.237326] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.671 [2024-07-15 22:21:12.237455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.671 [2024-07-15 22:21:12.238267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.671 [2024-07-15 22:21:12.238317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.671 [2024-07-15 22:21:12.238321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.671 [2024-07-15 22:21:12.281090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.238 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.238 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:59.238 22:21:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.238 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.238 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 [2024-07-15 22:21:12.897956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 Malloc0 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
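The rpc_cmd calls traced above and continued just below amount to a five-step target configuration: create the TCP transport, create a 64 MB malloc bdev, create subsystem cnode1, attach the bdev as its namespace, and open a listener on 10.0.0.2:4420. A condensed standalone sketch of that sequence (arguments are copied from the trace; calling scripts/rpc.py directly rather than through the rpc_cmd wrapper is an illustrative assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport with the options the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case1 below then tries to add the same Malloc0 to a second subsystem (cnode2) and expects the -32602 error, because the bdev is already claimed exclusive_write by cnode1.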
00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 [2024-07-15 22:21:12.973367] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:59.497 test case1: single bdev can't be used in multiple subsystems 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.497 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:59.497 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:59.497 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.497 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 [2024-07-15 22:21:13.009150] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:59.497 [2024-07-15 22:21:13.009190] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:59.497 [2024-07-15 22:21:13.009201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.498 request: 00:09:59.498 { 00:09:59.498 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:59.498 "namespace": { 00:09:59.498 "bdev_name": "Malloc0", 00:09:59.498 "no_auto_visible": false 00:09:59.498 }, 00:09:59.498 "method": "nvmf_subsystem_add_ns", 00:09:59.498 "req_id": 1 00:09:59.498 } 00:09:59.498 Got JSON-RPC error response 00:09:59.498 response: 00:09:59.498 { 00:09:59.498 "code": -32602, 00:09:59.498 "message": "Invalid parameters" 00:09:59.498 } 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:09:59.498 Adding namespace failed - expected result. 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:59.498 test case2: host connect to nvmf target in multiple paths 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.498 [2024-07-15 22:21:13.025253] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.498 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.756 22:21:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:02.290 22:21:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:02.290 [global] 00:10:02.290 thread=1 00:10:02.290 invalidate=1 00:10:02.290 rw=write 00:10:02.290 time_based=1 00:10:02.290 runtime=1 00:10:02.290 ioengine=libaio 00:10:02.290 direct=1 00:10:02.290 bs=4096 00:10:02.290 iodepth=1 00:10:02.290 norandommap=0 00:10:02.290 numjobs=1 00:10:02.290 00:10:02.290 verify_dump=1 00:10:02.290 verify_backlog=512 00:10:02.290 verify_state_save=0 00:10:02.290 do_verify=1 00:10:02.290 verify=crc32c-intel 00:10:02.290 [job0] 00:10:02.290 filename=/dev/nvme0n1 00:10:02.290 Could not set queue depth (nvme0n1) 00:10:02.290 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.290 fio-3.35 00:10:02.290 Starting 1 thread 00:10:03.225 00:10:03.225 job0: (groupid=0, jobs=1): err= 0: pid=68422: Mon Jul 15 22:21:16 
2024 00:10:03.225 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:03.225 slat (nsec): min=7899, max=27978, avg=9883.23, stdev=1835.56 00:10:03.225 clat (usec): min=110, max=489, avg=156.09, stdev=26.91 00:10:03.225 lat (usec): min=118, max=500, avg=165.98, stdev=26.90 00:10:03.225 clat percentiles (usec): 00:10:03.225 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 135], 00:10:03.225 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 157], 00:10:03.225 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 206], 00:10:03.225 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 367], 99.95th=[ 465], 00:10:03.225 | 99.99th=[ 490] 00:10:03.225 write: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets 00:10:03.225 slat (usec): min=10, max=197, avg=15.46, stdev= 6.75 00:10:03.225 clat (usec): min=61, max=768, avg=92.26, stdev=20.81 00:10:03.225 lat (usec): min=73, max=781, avg=107.72, stdev=22.61 00:10:03.225 clat percentiles (usec): 00:10:03.225 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 80], 00:10:03.225 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 93], 00:10:03.225 | 70.00th=[ 97], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 119], 00:10:03.225 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 293], 99.95th=[ 461], 00:10:03.225 | 99.99th=[ 766] 00:10:03.225 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:10:03.225 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:03.225 lat (usec) : 100=39.03%, 250=60.58%, 500=0.37%, 1000=0.01% 00:10:03.225 cpu : usr=2.00%, sys=7.10%, ctx=7276, majf=0, minf=2 00:10:03.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.225 issued rwts: total=3584,3692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.225 00:10:03.225 Run status group 0 (all jobs): 00:10:03.225 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:10:03.225 WRITE: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=14.4MiB (15.1MB), run=1001-1001msec 00:10:03.225 00:10:03.225 Disk stats (read/write): 00:10:03.225 nvme0n1: ios=3122/3486, merge=0/0, ticks=505/350, in_queue=855, util=91.58% 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.225 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.484 rmmod nvme_tcp 00:10:03.484 rmmod nvme_fabrics 00:10:03.484 rmmod nvme_keyring 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68330 ']' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68330 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68330 ']' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68330 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68330 00:10:03.484 killing process with pid 68330 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68330' 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68330 00:10:03.484 22:21:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68330 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:03.742 00:10:03.742 real 0m5.910s 00:10:03.742 user 0m18.097s 00:10:03.742 sys 0m2.763s 00:10:03.742 ************************************ 00:10:03.742 END TEST nvmf_nmic 00:10:03.742 ************************************ 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.742 22:21:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.742 22:21:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:03.742 22:21:17 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.742 22:21:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:03.742 22:21:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.742 22:21:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:03.742 ************************************ 00:10:03.742 START TEST nvmf_fio_target 00:10:03.742 ************************************ 00:10:03.742 22:21:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.000 * Looking for test storage... 00:10:04.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.000 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:04.001 Cannot find device "nvmf_tgt_br" 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.001 Cannot find device "nvmf_tgt_br2" 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:04.001 Cannot find device "nvmf_tgt_br" 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:04.001 Cannot find device "nvmf_tgt_br2" 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:04.001 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:04.260 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:04.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:10:04.519 00:10:04.519 --- 10.0.0.2 ping statistics --- 00:10:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.519 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:04.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:10:04.519 00:10:04.519 --- 10.0.0.3 ping statistics --- 00:10:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.519 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:10:04.519 00:10:04.519 --- 10.0.0.1 ping statistics --- 00:10:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.519 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.519 22:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68605 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68605 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68605 ']' 00:10:04.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
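The nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology: the host keeps nvmf_init_if at 10.0.0.1/24, the nvmf_tgt_ns_spdk namespace gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the three bridge-side peers are enslaved to nvmf_br. A condensed sketch of that setup (commands and names are taken from the trace; the loop is an illustrative shorthand, and the teardown plus the individual 'ip link set ... up' calls are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator leg, stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br             # first target leg
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2            # second target leg
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$peer" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port on the initiator side
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let bridged traffic through the FORWARD chain

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the path in both directions before nvmf_tgt is started inside the namespace.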
00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.519 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.519 [2024-07-15 22:21:18.063543] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:04.519 [2024-07-15 22:21:18.063635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.777 [2024-07-15 22:21:18.208726] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.777 [2024-07-15 22:21:18.308583] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.777 [2024-07-15 22:21:18.308632] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.777 [2024-07-15 22:21:18.308642] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.777 [2024-07-15 22:21:18.308651] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.777 [2024-07-15 22:21:18.308657] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
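The -m 0xF mask handed to nvmf_tgt above is binary 1111, i.e. cores 0 through 3, which matches the 'Total cores available: 4' notice and the four reactor start-up messages that follow. A throwaway snippet for decoding such a mask (illustrative only, not part of the harness):

  mask=0xF                                        # SPDK/DPDK core mask as passed on the command line
  for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && echo "core $i"     # prints core 0 .. core 3 for 0xF
  done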
00:10:04.777 [2024-07-15 22:21:18.308765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.777 [2024-07-15 22:21:18.309001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.777 [2024-07-15 22:21:18.309966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.777 [2024-07-15 22:21:18.309967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.777 [2024-07-15 22:21:18.352924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:05.343 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.343 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:05.343 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.343 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.343 22:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.600 22:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.600 22:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:05.600 [2024-07-15 22:21:19.182636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.600 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.858 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:05.858 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.115 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:06.116 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.374 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:06.374 22:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.632 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:06.632 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:06.889 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.148 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:07.148 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.406 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:07.406 22:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.664 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:07.664 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:07.664 22:21:21 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.921 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.921 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.179 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:08.179 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.436 22:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.694 [2024-07-15 22:21:22.092531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.694 22:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:08.953 22:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:08.953 22:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:09.209 22:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:11.177 22:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:11.177 [global] 00:10:11.177 thread=1 00:10:11.177 invalidate=1 00:10:11.177 rw=write 00:10:11.177 time_based=1 00:10:11.177 runtime=1 00:10:11.177 ioengine=libaio 00:10:11.177 direct=1 00:10:11.177 bs=4096 00:10:11.178 iodepth=1 00:10:11.178 norandommap=0 00:10:11.178 numjobs=1 00:10:11.178 00:10:11.178 verify_dump=1 00:10:11.178 verify_backlog=512 00:10:11.178 verify_state_save=0 00:10:11.178 do_verify=1 00:10:11.178 
verify=crc32c-intel 00:10:11.178 [job0] 00:10:11.178 filename=/dev/nvme0n1 00:10:11.178 [job1] 00:10:11.178 filename=/dev/nvme0n2 00:10:11.178 [job2] 00:10:11.178 filename=/dev/nvme0n3 00:10:11.178 [job3] 00:10:11.178 filename=/dev/nvme0n4 00:10:11.178 Could not set queue depth (nvme0n1) 00:10:11.178 Could not set queue depth (nvme0n2) 00:10:11.178 Could not set queue depth (nvme0n3) 00:10:11.178 Could not set queue depth (nvme0n4) 00:10:11.433 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.433 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.434 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.434 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.434 fio-3.35 00:10:11.434 Starting 4 threads 00:10:12.801 00:10:12.801 job0: (groupid=0, jobs=1): err= 0: pid=68785: Mon Jul 15 22:21:26 2024 00:10:12.801 read: IOPS=2791, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:10:12.801 slat (nsec): min=6306, max=28629, avg=9232.13, stdev=2014.87 00:10:12.801 clat (usec): min=118, max=453, avg=188.48, stdev=60.38 00:10:12.801 lat (usec): min=126, max=462, avg=197.71, stdev=60.34 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 141], 00:10:12.801 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 182], 00:10:12.801 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 293], 95.00th=[ 318], 00:10:12.801 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 392], 99.95th=[ 437], 00:10:12.801 | 99.99th=[ 453] 00:10:12.801 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:12.801 slat (nsec): min=7466, max=88879, avg=14285.58, stdev=3412.97 00:10:12.801 clat (usec): min=69, max=895, avg=129.41, stdev=34.30 00:10:12.801 lat (usec): min=82, max=920, avg=143.70, stdev=34.43 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 96], 20.00th=[ 103], 00:10:12.801 | 30.00th=[ 110], 40.00th=[ 117], 50.00th=[ 123], 60.00th=[ 131], 00:10:12.801 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 188], 00:10:12.801 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 285], 99.95th=[ 445], 00:10:12.801 | 99.99th=[ 898] 00:10:12.801 bw ( KiB/s): min=13280, max=13280, per=29.67%, avg=13280.00, stdev= 0.00, samples=1 00:10:12.801 iops : min= 3320, max= 3320, avg=3320.00, stdev= 0.00, samples=1 00:10:12.801 lat (usec) : 100=8.57%, 250=84.27%, 500=7.14%, 1000=0.02% 00:10:12.801 cpu : usr=1.10%, sys=6.10%, ctx=5866, majf=0, minf=9 00:10:12.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 issued rwts: total=2794,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.801 job1: (groupid=0, jobs=1): err= 0: pid=68786: Mon Jul 15 22:21:26 2024 00:10:12.801 read: IOPS=3142, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:10:12.801 slat (nsec): min=7580, max=55242, avg=9617.92, stdev=1930.66 00:10:12.801 clat (usec): min=115, max=574, avg=148.16, stdev=20.46 00:10:12.801 lat (usec): min=124, max=590, avg=157.78, stdev=20.75 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 127], 5.00th=[ 
131], 10.00th=[ 133], 20.00th=[ 137], 00:10:12.801 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:10:12.801 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 178], 00:10:12.801 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 383], 99.95th=[ 562], 00:10:12.801 | 99.99th=[ 578] 00:10:12.801 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:12.801 slat (nsec): min=9635, max=80177, avg=15093.87, stdev=3995.11 00:10:12.801 clat (usec): min=72, max=1935, avg=123.30, stdev=82.42 00:10:12.801 lat (usec): min=85, max=1950, avg=138.39, stdev=83.95 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 93], 00:10:12.801 | 30.00th=[ 95], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:10:12.801 | 70.00th=[ 110], 80.00th=[ 119], 90.00th=[ 149], 95.00th=[ 330], 00:10:12.801 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 1106], 99.95th=[ 1631], 00:10:12.801 | 99.99th=[ 1942] 00:10:12.801 bw ( KiB/s): min=12288, max=12288, per=27.46%, avg=12288.00, stdev= 0.00, samples=1 00:10:12.801 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:12.801 lat (usec) : 100=25.69%, 250=70.33%, 500=3.86%, 750=0.06% 00:10:12.801 lat (msec) : 2=0.06% 00:10:12.801 cpu : usr=1.60%, sys=6.90%, ctx=6733, majf=0, minf=15 00:10:12.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 issued rwts: total=3146,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.801 job2: (groupid=0, jobs=1): err= 0: pid=68787: Mon Jul 15 22:21:26 2024 00:10:12.801 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:12.801 slat (nsec): min=6279, max=93111, avg=12014.10, stdev=4875.37 00:10:12.801 clat (usec): min=132, max=468, avg=255.97, stdev=40.73 00:10:12.801 lat (usec): min=148, max=478, avg=267.99, stdev=40.72 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 169], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 225], 00:10:12.801 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258], 00:10:12.801 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 330], 00:10:12.801 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 437], 99.95th=[ 445], 00:10:12.801 | 99.99th=[ 469] 00:10:12.801 write: IOPS=2078, BW=8316KiB/s (8515kB/s)(8324KiB/1001msec); 0 zone resets 00:10:12.801 slat (nsec): min=8190, max=76821, avg=17965.99, stdev=6297.95 00:10:12.801 clat (usec): min=81, max=6960, avg=196.04, stdev=214.04 00:10:12.801 lat (usec): min=103, max=6978, avg=214.00, stdev=215.12 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 99], 5.00th=[ 114], 10.00th=[ 126], 20.00th=[ 141], 00:10:12.801 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 180], 00:10:12.801 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 277], 95.00th=[ 367], 00:10:12.801 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 3359], 99.95th=[ 3982], 00:10:12.801 | 99.99th=[ 6980] 00:10:12.801 bw ( KiB/s): min= 9304, max= 9304, per=20.79%, avg=9304.00, stdev= 0.00, samples=1 00:10:12.801 iops : min= 2326, max= 2326, avg=2326.00, stdev= 0.00, samples=1 00:10:12.801 lat (usec) : 100=0.65%, 250=70.48%, 500=28.65%, 1000=0.02% 00:10:12.801 lat (msec) : 2=0.07%, 4=0.10%, 10=0.02% 00:10:12.801 cpu : usr=1.10%, sys=5.40%, ctx=4135, majf=0, minf=11 00:10:12.801 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.801 job3: (groupid=0, jobs=1): err= 0: pid=68788: Mon Jul 15 22:21:26 2024 00:10:12.801 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:12.801 slat (nsec): min=7902, max=51433, avg=12643.50, stdev=6088.72 00:10:12.801 clat (usec): min=129, max=2711, avg=230.67, stdev=135.75 00:10:12.801 lat (usec): min=137, max=2738, avg=243.31, stdev=138.46 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:12.801 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 223], 60.00th=[ 241], 00:10:12.801 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 404], 00:10:12.801 | 99.00th=[ 668], 99.50th=[ 742], 99.90th=[ 2245], 99.95th=[ 2376], 00:10:12.801 | 99.99th=[ 2704] 00:10:12.801 write: IOPS=2460, BW=9842KiB/s (10.1MB/s)(9852KiB/1001msec); 0 zone resets 00:10:12.801 slat (usec): min=12, max=143, avg=20.00, stdev= 9.42 00:10:12.801 clat (usec): min=89, max=1542, avg=180.93, stdev=89.14 00:10:12.801 lat (usec): min=102, max=1629, avg=200.93, stdev=93.43 00:10:12.801 clat percentiles (usec): 00:10:12.801 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 115], 00:10:12.801 | 30.00th=[ 124], 40.00th=[ 155], 50.00th=[ 178], 60.00th=[ 186], 00:10:12.801 | 70.00th=[ 196], 80.00th=[ 212], 90.00th=[ 281], 95.00th=[ 338], 00:10:12.801 | 99.00th=[ 388], 99.50th=[ 506], 99.90th=[ 1319], 99.95th=[ 1516], 00:10:12.801 | 99.99th=[ 1549] 00:10:12.801 bw ( KiB/s): min= 8192, max= 8192, per=18.30%, avg=8192.00, stdev= 0.00, samples=1 00:10:12.801 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:12.801 lat (usec) : 100=1.84%, 250=76.88%, 500=19.77%, 750=1.17%, 1000=0.13% 00:10:12.801 lat (msec) : 2=0.11%, 4=0.09% 00:10:12.801 cpu : usr=1.30%, sys=6.30%, ctx=4511, majf=0, minf=6 00:10:12.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.801 issued rwts: total=2048,2463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.801 00:10:12.801 Run status group 0 (all jobs): 00:10:12.802 READ: bw=39.2MiB/s (41.1MB/s), 8184KiB/s-12.3MiB/s (8380kB/s-12.9MB/s), io=39.2MiB (41.1MB), run=1001-1001msec 00:10:12.802 WRITE: bw=43.7MiB/s (45.8MB/s), 8316KiB/s-14.0MiB/s (8515kB/s-14.7MB/s), io=43.8MiB (45.9MB), run=1001-1001msec 00:10:12.802 00:10:12.802 Disk stats (read/write): 00:10:12.802 nvme0n1: ios=2522/2560, merge=0/0, ticks=513/324, in_queue=837, util=88.08% 00:10:12.802 nvme0n2: ios=2575/2992, merge=0/0, ticks=398/397, in_queue=795, util=85.76% 00:10:12.802 nvme0n3: ios=1536/1946, merge=0/0, ticks=401/389, in_queue=790, util=88.27% 00:10:12.802 nvme0n4: ios=1536/1869, merge=0/0, ticks=390/397, in_queue=787, util=89.23% 00:10:12.802 22:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:12.802 [global] 00:10:12.802 thread=1 00:10:12.802 invalidate=1 00:10:12.802 
rw=randwrite 00:10:12.802 time_based=1 00:10:12.802 runtime=1 00:10:12.802 ioengine=libaio 00:10:12.802 direct=1 00:10:12.802 bs=4096 00:10:12.802 iodepth=1 00:10:12.802 norandommap=0 00:10:12.802 numjobs=1 00:10:12.802 00:10:12.802 verify_dump=1 00:10:12.802 verify_backlog=512 00:10:12.802 verify_state_save=0 00:10:12.802 do_verify=1 00:10:12.802 verify=crc32c-intel 00:10:12.802 [job0] 00:10:12.802 filename=/dev/nvme0n1 00:10:12.802 [job1] 00:10:12.802 filename=/dev/nvme0n2 00:10:12.802 [job2] 00:10:12.802 filename=/dev/nvme0n3 00:10:12.802 [job3] 00:10:12.802 filename=/dev/nvme0n4 00:10:12.802 Could not set queue depth (nvme0n1) 00:10:12.802 Could not set queue depth (nvme0n2) 00:10:12.802 Could not set queue depth (nvme0n3) 00:10:12.802 Could not set queue depth (nvme0n4) 00:10:12.802 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.802 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.802 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.802 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.802 fio-3.35 00:10:12.802 Starting 4 threads 00:10:14.188 00:10:14.188 job0: (groupid=0, jobs=1): err= 0: pid=68841: Mon Jul 15 22:21:27 2024 00:10:14.188 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:14.188 slat (nsec): min=7416, max=31325, avg=8738.60, stdev=1846.27 00:10:14.188 clat (usec): min=124, max=1986, avg=244.60, stdev=48.15 00:10:14.188 lat (usec): min=134, max=1994, avg=253.34, stdev=48.25 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:10:14.188 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:10:14.188 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 293], 00:10:14.188 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 660], 99.95th=[ 717], 00:10:14.188 | 99.99th=[ 1991] 00:10:14.188 write: IOPS=2442, BW=9770KiB/s (10.0MB/s)(9780KiB/1001msec); 0 zone resets 00:10:14.188 slat (usec): min=11, max=135, avg=15.18, stdev= 6.38 00:10:14.188 clat (usec): min=74, max=7071, avg=179.56, stdev=195.91 00:10:14.188 lat (usec): min=89, max=7086, avg=194.74, stdev=196.20 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 88], 5.00th=[ 97], 10.00th=[ 110], 20.00th=[ 161], 00:10:14.188 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:14.188 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 208], 00:10:14.188 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 3392], 99.95th=[ 3425], 00:10:14.188 | 99.99th=[ 7046] 00:10:14.188 bw ( KiB/s): min= 9696, max= 9696, per=24.34%, avg=9696.00, stdev= 0.00, samples=1 00:10:14.188 iops : min= 2424, max= 2424, avg=2424.00, stdev= 0.00, samples=1 00:10:14.188 lat (usec) : 100=3.41%, 250=83.46%, 500=12.84%, 750=0.09%, 1000=0.02% 00:10:14.188 lat (msec) : 2=0.04%, 4=0.11%, 10=0.02% 00:10:14.188 cpu : usr=0.80%, sys=4.80%, ctx=4495, majf=0, minf=9 00:10:14.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 issued rwts: total=2048,2445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.188 latency : target=0, window=0, percentile=100.00%, depth=1 
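The job0 statistics above (and the job1-job3 blocks that follow) come from the randwrite verify pass that fio-wrapper launches with the parameters echoed before "Starting 4 threads". As a rough standalone equivalent -- assuming the same /dev/nvme0n1 through /dev/nvme0n4 namespaces are already connected and calling fio directly instead of going through the wrapper script -- the run could be sketched like this; the job-file name randwrite-verify.fio is only a placeholder:

# Sketch only: mirrors the parameters printed in the trace above.
cat > randwrite-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio randwrite-verify.fio

With do_verify=1 and verify=crc32c-intel each job reads back and CRC-checks what it wrote, which is why the per-job output below reports both read and write IOPS even for a "randwrite" workload.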
00:10:14.188 job1: (groupid=0, jobs=1): err= 0: pid=68842: Mon Jul 15 22:21:27 2024 00:10:14.188 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:14.188 slat (nsec): min=5964, max=31510, avg=7962.40, stdev=2317.39 00:10:14.188 clat (usec): min=161, max=682, avg=231.41, stdev=54.04 00:10:14.188 lat (usec): min=169, max=688, avg=239.38, stdev=54.48 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:14.188 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:10:14.188 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 289], 00:10:14.188 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 652], 00:10:14.188 | 99.99th=[ 685] 00:10:14.188 write: IOPS=2555, BW=9.98MiB/s (10.5MB/s)(9.99MiB/1001msec); 0 zone resets 00:10:14.188 slat (usec): min=7, max=120, avg=12.80, stdev= 5.07 00:10:14.188 clat (usec): min=100, max=418, avg=184.88, stdev=24.69 00:10:14.188 lat (usec): min=123, max=429, avg=197.68, stdev=26.32 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:10:14.188 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:14.188 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 233], 00:10:14.188 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 326], 00:10:14.188 | 99.99th=[ 420] 00:10:14.188 bw ( KiB/s): min= 9160, max= 9160, per=23.00%, avg=9160.00, stdev= 0.00, samples=1 00:10:14.188 iops : min= 2290, max= 2290, avg=2290.00, stdev= 0.00, samples=1 00:10:14.188 lat (usec) : 250=92.42%, 500=6.95%, 750=0.63% 00:10:14.188 cpu : usr=1.40%, sys=3.80%, ctx=4606, majf=0, minf=17 00:10:14.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 issued rwts: total=2048,2558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.188 job2: (groupid=0, jobs=1): err= 0: pid=68843: Mon Jul 15 22:21:27 2024 00:10:14.188 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:14.188 slat (usec): min=8, max=228, avg= 9.65, stdev= 5.27 00:10:14.188 clat (usec): min=130, max=346, avg=239.55, stdev=27.26 00:10:14.188 lat (usec): min=139, max=457, avg=249.20, stdev=27.67 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 145], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 225], 00:10:14.188 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:10:14.188 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 285], 00:10:14.188 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 343], 99.95th=[ 347], 00:10:14.188 | 99.99th=[ 347] 00:10:14.188 write: IOPS=2406, BW=9626KiB/s (9857kB/s)(9636KiB/1001msec); 0 zone resets 00:10:14.188 slat (usec): min=12, max=123, avg=18.76, stdev=10.96 00:10:14.188 clat (usec): min=93, max=2624, avg=182.19, stdev=60.84 00:10:14.188 lat (usec): min=106, max=2651, avg=200.94, stdev=64.63 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 99], 5.00th=[ 112], 10.00th=[ 151], 20.00th=[ 165], 00:10:14.188 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:14.188 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 221], 95.00th=[ 253], 00:10:14.188 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 326], 00:10:14.188 | 99.99th=[ 2638] 00:10:14.188 bw ( 
KiB/s): min= 9488, max= 9488, per=23.82%, avg=9488.00, stdev= 0.00, samples=1 00:10:14.188 iops : min= 2372, max= 2372, avg=2372.00, stdev= 0.00, samples=1 00:10:14.188 lat (usec) : 100=0.85%, 250=84.32%, 500=14.81% 00:10:14.188 lat (msec) : 4=0.02% 00:10:14.188 cpu : usr=1.40%, sys=5.10%, ctx=4459, majf=0, minf=8 00:10:14.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 issued rwts: total=2048,2409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.188 job3: (groupid=0, jobs=1): err= 0: pid=68844: Mon Jul 15 22:21:27 2024 00:10:14.188 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:14.188 slat (nsec): min=6148, max=32988, avg=9121.34, stdev=2957.22 00:10:14.188 clat (usec): min=178, max=677, avg=230.30, stdev=53.29 00:10:14.188 lat (usec): min=188, max=686, avg=239.42, stdev=54.30 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:14.188 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:10:14.188 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 285], 00:10:14.188 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 635], 00:10:14.188 | 99.99th=[ 676] 00:10:14.188 write: IOPS=2553, BW=9.97MiB/s (10.5MB/s)(9.98MiB/1001msec); 0 zone resets 00:10:14.188 slat (nsec): min=7869, max=71863, avg=14353.82, stdev=5322.88 00:10:14.188 clat (usec): min=112, max=473, avg=183.17, stdev=24.39 00:10:14.188 lat (usec): min=150, max=488, avg=197.52, stdev=26.24 00:10:14.188 clat percentiles (usec): 00:10:14.188 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:10:14.188 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:10:14.188 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 229], 00:10:14.188 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 326], 00:10:14.188 | 99.99th=[ 474] 00:10:14.188 bw ( KiB/s): min= 9152, max= 9152, per=22.98%, avg=9152.00, stdev= 0.00, samples=1 00:10:14.188 iops : min= 2288, max= 2288, avg=2288.00, stdev= 0.00, samples=1 00:10:14.188 lat (usec) : 250=92.79%, 500=6.58%, 750=0.63% 00:10:14.188 cpu : usr=1.30%, sys=4.90%, ctx=4604, majf=0, minf=11 00:10:14.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.188 issued rwts: total=2048,2556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.188 00:10:14.188 Run status group 0 (all jobs): 00:10:14.188 READ: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:14.188 WRITE: bw=38.9MiB/s (40.8MB/s), 9626KiB/s-9.98MiB/s (9857kB/s-10.5MB/s), io=38.9MiB (40.8MB), run=1001-1001msec 00:10:14.188 00:10:14.188 Disk stats (read/write): 00:10:14.188 nvme0n1: ios=1880/2048, merge=0/0, ticks=461/365, in_queue=826, util=87.78% 00:10:14.188 nvme0n2: ios=1937/2048, merge=0/0, ticks=435/356, in_queue=791, util=89.20% 00:10:14.188 nvme0n3: ios=1813/2048, merge=0/0, ticks=437/385, in_queue=822, util=89.03% 00:10:14.188 nvme0n4: ios=1886/2048, merge=0/0, ticks=430/372, 
in_queue=802, util=89.76% 00:10:14.188 22:21:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:14.188 [global] 00:10:14.188 thread=1 00:10:14.188 invalidate=1 00:10:14.188 rw=write 00:10:14.188 time_based=1 00:10:14.188 runtime=1 00:10:14.188 ioengine=libaio 00:10:14.188 direct=1 00:10:14.188 bs=4096 00:10:14.188 iodepth=128 00:10:14.188 norandommap=0 00:10:14.188 numjobs=1 00:10:14.188 00:10:14.188 verify_dump=1 00:10:14.188 verify_backlog=512 00:10:14.188 verify_state_save=0 00:10:14.188 do_verify=1 00:10:14.188 verify=crc32c-intel 00:10:14.188 [job0] 00:10:14.188 filename=/dev/nvme0n1 00:10:14.188 [job1] 00:10:14.188 filename=/dev/nvme0n2 00:10:14.188 [job2] 00:10:14.188 filename=/dev/nvme0n3 00:10:14.188 [job3] 00:10:14.188 filename=/dev/nvme0n4 00:10:14.188 Could not set queue depth (nvme0n1) 00:10:14.188 Could not set queue depth (nvme0n2) 00:10:14.188 Could not set queue depth (nvme0n3) 00:10:14.188 Could not set queue depth (nvme0n4) 00:10:14.188 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.188 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.189 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.189 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.189 fio-3.35 00:10:14.189 Starting 4 threads 00:10:15.559 00:10:15.559 job0: (groupid=0, jobs=1): err= 0: pid=68903: Mon Jul 15 22:21:28 2024 00:10:15.559 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:15.559 slat (usec): min=12, max=3375, avg=129.29, stdev=485.53 00:10:15.559 clat (usec): min=11756, max=20446, avg=17070.23, stdev=1287.14 00:10:15.559 lat (usec): min=13926, max=20479, avg=17199.53, stdev=1205.24 00:10:15.559 clat percentiles (usec): 00:10:15.559 | 1.00th=[14091], 5.00th=[14615], 10.00th=[15008], 20.00th=[16450], 00:10:15.559 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:10:15.559 | 70.00th=[17433], 80.00th=[17695], 90.00th=[19006], 95.00th=[19530], 00:10:15.559 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:10:15.559 | 99.99th=[20317] 00:10:15.559 write: IOPS=4028, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec); 0 zone resets 00:10:15.559 slat (usec): min=23, max=7425, avg=121.42, stdev=487.19 00:10:15.559 clat (usec): min=2460, max=23427, avg=16237.23, stdev=2293.48 00:10:15.559 lat (usec): min=5175, max=23460, avg=16358.65, stdev=2255.42 00:10:15.559 clat percentiles (usec): 00:10:15.559 | 1.00th=[ 7701], 5.00th=[12518], 10.00th=[13960], 20.00th=[15664], 00:10:15.559 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:10:15.559 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18744], 95.00th=[19530], 00:10:15.559 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:10:15.559 | 99.99th=[23462] 00:10:15.559 bw ( KiB/s): min=14928, max=16384, per=26.01%, avg=15656.00, stdev=1029.55, samples=2 00:10:15.560 iops : min= 3732, max= 4096, avg=3914.00, stdev=257.39, samples=2 00:10:15.560 lat (msec) : 4=0.01%, 10=1.43%, 20=96.31%, 50=2.24% 00:10:15.560 cpu : usr=5.29%, sys=14.97%, ctx=356, majf=0, minf=9 00:10:15.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.560 issued rwts: total=3584,4041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.560 job1: (groupid=0, jobs=1): err= 0: pid=68904: Mon Jul 15 22:21:28 2024 00:10:15.560 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:15.560 slat (usec): min=7, max=5378, avg=125.85, stdev=593.88 00:10:15.560 clat (usec): min=11318, max=18863, avg=17146.10, stdev=902.75 00:10:15.560 lat (usec): min=14807, max=18883, avg=17271.95, stdev=686.19 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[13566], 5.00th=[15008], 10.00th=[16450], 20.00th=[16712], 00:10:15.560 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:10:15.560 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:10:15.560 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:10:15.560 | 99.99th=[18744] 00:10:15.560 write: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1002msec); 0 zone resets 00:10:15.560 slat (usec): min=10, max=11732, avg=125.56, stdev=546.70 00:10:15.560 clat (usec): min=266, max=24647, avg=16074.36, stdev=2254.25 00:10:15.560 lat (usec): min=2150, max=24662, avg=16199.92, stdev=2195.49 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[ 7898], 5.00th=[13173], 10.00th=[14746], 20.00th=[15401], 00:10:15.560 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:10:15.560 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17433], 95.00th=[18482], 00:10:15.560 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:10:15.560 | 99.99th=[24773] 00:10:15.560 bw ( KiB/s): min=14856, max=16416, per=25.98%, avg=15636.00, stdev=1103.09, samples=2 00:10:15.560 iops : min= 3714, max= 4104, avg=3909.00, stdev=275.77, samples=2 00:10:15.560 lat (usec) : 500=0.01% 00:10:15.560 lat (msec) : 4=0.42%, 10=0.45%, 20=97.47%, 50=1.65% 00:10:15.560 cpu : usr=5.39%, sys=14.19%, ctx=270, majf=0, minf=17 00:10:15.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.560 issued rwts: total=3584,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.560 job2: (groupid=0, jobs=1): err= 0: pid=68906: Mon Jul 15 22:21:28 2024 00:10:15.560 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:15.560 slat (usec): min=16, max=5536, avg=148.72, stdev=667.22 00:10:15.560 clat (usec): min=12869, max=24666, avg=19517.44, stdev=1365.26 00:10:15.560 lat (usec): min=12897, max=25841, avg=19666.16, stdev=1248.46 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[15139], 5.00th=[16909], 10.00th=[18220], 20.00th=[18744], 00:10:15.560 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:10:15.560 | 70.00th=[19792], 80.00th=[20317], 90.00th=[20841], 95.00th=[21627], 00:10:15.560 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:10:15.560 | 99.99th=[24773] 00:10:15.560 write: IOPS=3518, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:10:15.560 slat (usec): min=20, max=5336, avg=143.59, stdev=591.38 00:10:15.560 clat (usec): min=331, max=25009, avg=18880.35, stdev=2206.19 00:10:15.560 lat (usec): min=3911, max=25040, avg=19023.93, 
stdev=2139.77 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[ 9372], 5.00th=[15926], 10.00th=[17433], 20.00th=[18220], 00:10:15.560 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:10:15.560 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[20841], 00:10:15.560 | 99.00th=[23200], 99.50th=[24249], 99.90th=[24773], 99.95th=[24773], 00:10:15.560 | 99.99th=[25035] 00:10:15.560 bw ( KiB/s): min=13563, max=13704, per=22.65%, avg=13633.50, stdev=99.70, samples=2 00:10:15.560 iops : min= 3390, max= 3426, avg=3408.00, stdev=25.46, samples=2 00:10:15.560 lat (usec) : 500=0.02% 00:10:15.560 lat (msec) : 4=0.05%, 10=0.71%, 20=76.00%, 50=23.22% 00:10:15.560 cpu : usr=4.69%, sys=13.36%, ctx=465, majf=0, minf=7 00:10:15.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.560 issued rwts: total=3072,3533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.560 job3: (groupid=0, jobs=1): err= 0: pid=68910: Mon Jul 15 22:21:28 2024 00:10:15.560 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:15.560 slat (usec): min=18, max=5842, avg=146.51, stdev=666.36 00:10:15.560 clat (usec): min=14031, max=27431, avg=19620.01, stdev=1280.35 00:10:15.560 lat (usec): min=15048, max=27463, avg=19766.52, stdev=1157.92 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[15533], 5.00th=[17957], 10.00th=[18482], 20.00th=[19006], 00:10:15.560 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:10:15.560 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20841], 95.00th=[21890], 00:10:15.560 | 99.00th=[23725], 99.50th=[23725], 99.90th=[26084], 99.95th=[27395], 00:10:15.560 | 99.99th=[27395] 00:10:15.560 write: IOPS=3497, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:10:15.560 slat (usec): min=10, max=5405, avg=145.59, stdev=591.24 00:10:15.560 clat (usec): min=578, max=23222, avg=18807.35, stdev=2372.37 00:10:15.560 lat (usec): min=611, max=23834, avg=18952.94, stdev=2312.16 00:10:15.560 clat percentiles (usec): 00:10:15.560 | 1.00th=[ 5932], 5.00th=[15401], 10.00th=[17433], 20.00th=[18220], 00:10:15.560 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:10:15.560 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[20841], 00:10:15.560 | 99.00th=[21890], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:10:15.560 | 99.99th=[23200] 00:10:15.560 bw ( KiB/s): min=13512, max=13512, per=22.45%, avg=13512.00, stdev= 0.00, samples=1 00:10:15.560 iops : min= 3378, max= 3378, avg=3378.00, stdev= 0.00, samples=1 00:10:15.560 lat (usec) : 750=0.08%, 1000=0.09% 00:10:15.560 lat (msec) : 2=0.15%, 10=0.49%, 20=76.97%, 50=22.23% 00:10:15.560 cpu : usr=4.50%, sys=14.10%, ctx=423, majf=0, minf=17 00:10:15.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.560 issued rwts: total=3072,3501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.560 00:10:15.560 Run status group 0 (all jobs): 00:10:15.560 READ: bw=51.8MiB/s (54.3MB/s), 12.0MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), 
io=52.0MiB (54.5MB), run=1001-1004msec 00:10:15.560 WRITE: bw=58.8MiB/s (61.6MB/s), 13.7MiB/s-15.7MiB/s (14.3MB/s-16.5MB/s), io=59.0MiB (61.9MB), run=1001-1004msec 00:10:15.560 00:10:15.560 Disk stats (read/write): 00:10:15.561 nvme0n1: ios=3122/3455, merge=0/0, ticks=12647/11584, in_queue=24231, util=87.66% 00:10:15.561 nvme0n2: ios=3121/3520, merge=0/0, ticks=11738/11710, in_queue=23448, util=88.17% 00:10:15.561 nvme0n3: ios=2608/3072, merge=0/0, ticks=10639/10600, in_queue=21239, util=87.40% 00:10:15.561 nvme0n4: ios=2564/3072, merge=0/0, ticks=10385/11397, in_queue=21782, util=87.94% 00:10:15.561 22:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:15.561 [global] 00:10:15.561 thread=1 00:10:15.561 invalidate=1 00:10:15.561 rw=randwrite 00:10:15.561 time_based=1 00:10:15.561 runtime=1 00:10:15.561 ioengine=libaio 00:10:15.561 direct=1 00:10:15.561 bs=4096 00:10:15.561 iodepth=128 00:10:15.561 norandommap=0 00:10:15.561 numjobs=1 00:10:15.561 00:10:15.561 verify_dump=1 00:10:15.561 verify_backlog=512 00:10:15.561 verify_state_save=0 00:10:15.561 do_verify=1 00:10:15.561 verify=crc32c-intel 00:10:15.561 [job0] 00:10:15.561 filename=/dev/nvme0n1 00:10:15.561 [job1] 00:10:15.561 filename=/dev/nvme0n2 00:10:15.561 [job2] 00:10:15.561 filename=/dev/nvme0n3 00:10:15.561 [job3] 00:10:15.561 filename=/dev/nvme0n4 00:10:15.561 Could not set queue depth (nvme0n1) 00:10:15.561 Could not set queue depth (nvme0n2) 00:10:15.561 Could not set queue depth (nvme0n3) 00:10:15.561 Could not set queue depth (nvme0n4) 00:10:15.561 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.561 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.561 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.561 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.561 fio-3.35 00:10:15.561 Starting 4 threads 00:10:16.966 00:10:16.966 job0: (groupid=0, jobs=1): err= 0: pid=68965: Mon Jul 15 22:21:30 2024 00:10:16.966 read: IOPS=3699, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 00:10:16.966 slat (usec): min=16, max=11157, avg=125.11, stdev=619.90 00:10:16.966 clat (usec): min=868, max=31844, avg=16132.49, stdev=5478.08 00:10:16.966 lat (usec): min=3033, max=31887, avg=16257.61, stdev=5524.14 00:10:16.966 clat percentiles (usec): 00:10:16.966 | 1.00th=[ 4080], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10421], 00:10:16.966 | 30.00th=[10814], 40.00th=[11207], 50.00th=[17957], 60.00th=[20317], 00:10:16.966 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21890], 95.00th=[23462], 00:10:16.966 | 99.00th=[26346], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:10:16.966 | 99.99th=[31851] 00:10:16.966 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:16.966 slat (usec): min=23, max=7962, avg=120.05, stdev=600.09 00:10:16.966 clat (usec): min=7623, max=29611, avg=16284.06, stdev=5210.37 00:10:16.966 lat (usec): min=7656, max=29647, avg=16404.10, stdev=5273.86 00:10:16.966 clat percentiles (usec): 00:10:16.966 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:10:16.966 | 30.00th=[10159], 40.00th=[16057], 50.00th=[19268], 60.00th=[19530], 00:10:16.966 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[21890], 
00:10:16.966 | 99.00th=[26608], 99.50th=[27395], 99.90th=[29230], 99.95th=[29230], 00:10:16.966 | 99.99th=[29492] 00:10:16.966 bw ( KiB/s): min=12312, max=20472, per=24.07%, avg=16392.00, stdev=5769.99, samples=2 00:10:16.966 iops : min= 3078, max= 5118, avg=4098.00, stdev=1442.50, samples=2 00:10:16.966 lat (usec) : 1000=0.01% 00:10:16.966 lat (msec) : 4=0.44%, 10=16.86%, 20=46.39%, 50=36.30% 00:10:16.966 cpu : usr=4.79%, sys=15.47%, ctx=327, majf=0, minf=9 00:10:16.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.966 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.966 job1: (groupid=0, jobs=1): err= 0: pid=68966: Mon Jul 15 22:21:30 2024 00:10:16.966 read: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1004msec) 00:10:16.966 slat (usec): min=6, max=11422, avg=166.42, stdev=739.22 00:10:16.966 clat (usec): min=3061, max=29580, avg=21072.81, stdev=3110.67 00:10:16.966 lat (usec): min=3082, max=32061, avg=21239.23, stdev=3135.65 00:10:16.966 clat percentiles (usec): 00:10:16.966 | 1.00th=[ 8586], 5.00th=[15664], 10.00th=[17957], 20.00th=[20055], 00:10:16.966 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:10:16.966 | 70.00th=[22414], 80.00th=[23200], 90.00th=[24249], 95.00th=[25297], 00:10:16.966 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:10:16.966 | 99.99th=[29492] 00:10:16.966 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:16.966 slat (usec): min=7, max=8103, avg=152.58, stdev=686.45 00:10:16.966 clat (usec): min=9848, max=29575, avg=20738.75, stdev=2625.62 00:10:16.966 lat (usec): min=9880, max=29639, avg=20891.33, stdev=2693.27 00:10:16.966 clat percentiles (usec): 00:10:16.966 | 1.00th=[13698], 5.00th=[17171], 10.00th=[18482], 20.00th=[19268], 00:10:16.966 | 30.00th=[19530], 40.00th=[19530], 50.00th=[20055], 60.00th=[20579], 00:10:16.966 | 70.00th=[21365], 80.00th=[22152], 90.00th=[24511], 95.00th=[26346], 00:10:16.966 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:10:16.966 | 99.99th=[29492] 00:10:16.966 bw ( KiB/s): min=12288, max=12312, per=18.06%, avg=12300.00, stdev=16.97, samples=2 00:10:16.966 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:16.966 lat (msec) : 4=0.36%, 10=0.30%, 20=31.84%, 50=67.50% 00:10:16.966 cpu : usr=3.49%, sys=11.67%, ctx=503, majf=0, minf=11 00:10:16.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.966 issued rwts: total=2986,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.966 job2: (groupid=0, jobs=1): err= 0: pid=68967: Mon Jul 15 22:21:30 2024 00:10:16.966 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:16.966 slat (usec): min=5, max=4315, avg=93.53, stdev=383.83 00:10:16.966 clat (usec): min=8390, max=16751, avg=12639.04, stdev=862.86 00:10:16.966 lat (usec): min=8398, max=17942, avg=12732.57, stdev=878.55 00:10:16.966 clat percentiles (usec): 00:10:16.967 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 
00:10:16.967 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:10:16.967 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:10:16.967 | 99.00th=[15270], 99.50th=[16057], 99.90th=[16450], 99.95th=[16712], 00:10:16.967 | 99.99th=[16712] 00:10:16.967 write: IOPS=5256, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1002msec); 0 zone resets 00:10:16.967 slat (usec): min=7, max=5291, avg=87.43, stdev=415.14 00:10:16.967 clat (usec): min=1441, max=16430, avg=11772.71, stdev=1125.96 00:10:16.967 lat (usec): min=1451, max=16490, avg=11860.14, stdev=1185.66 00:10:16.967 clat percentiles (usec): 00:10:16.967 | 1.00th=[ 7373], 5.00th=[10028], 10.00th=[10945], 20.00th=[11469], 00:10:16.967 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:10:16.967 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12387], 95.00th=[13173], 00:10:16.967 | 99.00th=[15008], 99.50th=[15401], 99.90th=[15795], 99.95th=[15926], 00:10:16.967 | 99.99th=[16450] 00:10:16.967 bw ( KiB/s): min=20480, max=20656, per=30.20%, avg=20568.00, stdev=124.45, samples=2 00:10:16.967 iops : min= 5120, max= 5164, avg=5142.00, stdev=31.11, samples=2 00:10:16.967 lat (msec) : 2=0.13%, 10=2.61%, 20=97.26% 00:10:16.967 cpu : usr=6.09%, sys=20.48%, ctx=330, majf=0, minf=6 00:10:16.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.967 issued rwts: total=5120,5267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.967 job3: (groupid=0, jobs=1): err= 0: pid=68968: Mon Jul 15 22:21:30 2024 00:10:16.967 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:16.967 slat (usec): min=9, max=7224, avg=109.13, stdev=413.82 00:10:16.967 clat (usec): min=9285, max=33269, avg=14509.18, stdev=4724.88 00:10:16.967 lat (usec): min=9315, max=33300, avg=14618.31, stdev=4766.20 00:10:16.967 clat percentiles (usec): 00:10:16.967 | 1.00th=[10028], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:10:16.967 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:10:16.967 | 70.00th=[13304], 80.00th=[18744], 90.00th=[23200], 95.00th=[23725], 00:10:16.967 | 99.00th=[28967], 99.50th=[30278], 99.90th=[32637], 99.95th=[32637], 00:10:16.967 | 99.99th=[33162] 00:10:16.967 write: IOPS=4645, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1003msec); 0 zone resets 00:10:16.967 slat (usec): min=21, max=3605, avg=94.42, stdev=317.59 00:10:16.967 clat (usec): min=374, max=31056, avg=12854.96, stdev=3748.21 00:10:16.967 lat (usec): min=3268, max=31253, avg=12949.38, stdev=3778.03 00:10:16.967 clat percentiles (usec): 00:10:16.967 | 1.00th=[ 7177], 5.00th=[10814], 10.00th=[11207], 20.00th=[11338], 00:10:16.967 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:16.967 | 70.00th=[12125], 80.00th=[12387], 90.00th=[16188], 95.00th=[22676], 00:10:16.967 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30540], 99.95th=[30802], 00:10:16.967 | 99.99th=[31065] 00:10:16.967 bw ( KiB/s): min=15776, max=21130, per=27.10%, avg=18453.00, stdev=3785.85, samples=2 00:10:16.967 iops : min= 3944, max= 5282, avg=4613.00, stdev=946.11, samples=2 00:10:16.967 lat (usec) : 500=0.01% 00:10:16.967 lat (msec) : 4=0.17%, 10=1.26%, 20=85.99%, 50=12.56% 00:10:16.967 cpu : usr=6.19%, sys=19.16%, ctx=596, majf=0, minf=11 00:10:16.967 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.967 issued rwts: total=4608,4659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.967 00:10:16.967 Run status group 0 (all jobs): 00:10:16.967 READ: bw=63.9MiB/s (67.0MB/s), 11.6MiB/s-20.0MiB/s (12.2MB/s-20.9MB/s), io=64.2MiB (67.3MB), run=1002-1004msec 00:10:16.967 WRITE: bw=66.5MiB/s (69.7MB/s), 12.0MiB/s-20.5MiB/s (12.5MB/s-21.5MB/s), io=66.8MiB (70.0MB), run=1002-1004msec 00:10:16.967 00:10:16.967 Disk stats (read/write): 00:10:16.967 nvme0n1: ios=3075/3072, merge=0/0, ticks=25302/24373, in_queue=49675, util=88.37% 00:10:16.967 nvme0n2: ios=2609/2765, merge=0/0, ticks=24486/23695, in_queue=48181, util=89.19% 00:10:16.967 nvme0n3: ios=4404/4608, merge=0/0, ticks=25885/20687, in_queue=46572, util=90.36% 00:10:16.967 nvme0n4: ios=4137/4452, merge=0/0, ticks=16729/14130, in_queue=30859, util=90.71% 00:10:16.967 22:21:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:16.967 22:21:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68981 00:10:16.967 22:21:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:16.967 22:21:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:16.967 [global] 00:10:16.967 thread=1 00:10:16.967 invalidate=1 00:10:16.967 rw=read 00:10:16.967 time_based=1 00:10:16.967 runtime=10 00:10:16.967 ioengine=libaio 00:10:16.967 direct=1 00:10:16.967 bs=4096 00:10:16.967 iodepth=1 00:10:16.967 norandommap=1 00:10:16.967 numjobs=1 00:10:16.967 00:10:16.967 [job0] 00:10:16.967 filename=/dev/nvme0n1 00:10:16.967 [job1] 00:10:16.967 filename=/dev/nvme0n2 00:10:16.967 [job2] 00:10:16.967 filename=/dev/nvme0n3 00:10:16.967 [job3] 00:10:16.967 filename=/dev/nvme0n4 00:10:16.967 Could not set queue depth (nvme0n1) 00:10:16.967 Could not set queue depth (nvme0n2) 00:10:16.967 Could not set queue depth (nvme0n3) 00:10:16.967 Could not set queue depth (nvme0n4) 00:10:16.967 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.967 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.967 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.967 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.967 fio-3.35 00:10:16.967 Starting 4 threads 00:10:20.272 22:21:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:20.272 fio: pid=69024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:20.272 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=48148480, buflen=4096 00:10:20.272 22:21:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:20.272 fio: pid=69023, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:20.272 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=52465664, buflen=4096 00:10:20.272 22:21:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:20.272 22:21:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:20.529 fio: pid=69021, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:20.529 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=56320000, buflen=4096 00:10:20.529 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.529 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:20.787 fio: pid=69022, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:20.787 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=65437696, buflen=4096 00:10:20.787 00:10:20.787 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69021: Mon Jul 15 22:21:34 2024 00:10:20.787 read: IOPS=4115, BW=16.1MiB/s (16.9MB/s)(53.7MiB/3341msec) 00:10:20.787 slat (usec): min=7, max=14248, avg=12.71, stdev=220.74 00:10:20.787 clat (usec): min=98, max=7572, avg=229.28, stdev=148.55 00:10:20.787 lat (usec): min=108, max=14506, avg=241.99, stdev=265.83 00:10:20.787 clat percentiles (usec): 00:10:20.787 | 1.00th=[ 124], 5.00th=[ 135], 10.00th=[ 153], 20.00th=[ 219], 00:10:20.787 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:20.787 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:10:20.787 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 2024], 99.95th=[ 4015], 00:10:20.787 | 99.99th=[ 7439] 00:10:20.787 bw ( KiB/s): min=15304, max=16544, per=26.36%, avg=15957.00, stdev=607.28, samples=6 00:10:20.787 iops : min= 3826, max= 4136, avg=3989.17, stdev=151.73, samples=6 00:10:20.787 lat (usec) : 100=0.01%, 250=86.50%, 500=13.23%, 750=0.09%, 1000=0.04% 00:10:20.788 lat (msec) : 2=0.02%, 4=0.05%, 10=0.05% 00:10:20.788 cpu : usr=0.87%, sys=3.50%, ctx=13759, majf=0, minf=1 00:10:20.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 issued rwts: total=13751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.788 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69022: Mon Jul 15 22:21:34 2024 00:10:20.788 read: IOPS=4454, BW=17.4MiB/s (18.2MB/s)(62.4MiB/3587msec) 00:10:20.788 slat (usec): min=7, max=12513, avg=12.33, stdev=180.76 00:10:20.788 clat (usec): min=91, max=3300, avg=211.24, stdev=69.94 00:10:20.788 lat (usec): min=103, max=12743, avg=223.57, stdev=193.29 00:10:20.788 clat percentiles (usec): 00:10:20.788 | 1.00th=[ 105], 5.00th=[ 117], 10.00th=[ 127], 20.00th=[ 141], 00:10:20.788 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:10:20.788 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:10:20.788 | 99.00th=[ 297], 99.50th=[ 424], 99.90th=[ 725], 99.95th=[ 996], 00:10:20.788 | 99.99th=[ 2704] 00:10:20.788 bw ( KiB/s): min=15560, max=16888, per=26.94%, avg=16311.83, stdev=494.25, samples=6 00:10:20.788 iops : min= 3890, max= 4222, avg=4077.83, stdev=123.61, samples=6 00:10:20.788 lat (usec) : 100=0.16%, 250=87.39%, 500=12.13%, 750=0.23%, 1000=0.04% 00:10:20.788 lat (msec) : 2=0.02%, 
4=0.03% 00:10:20.788 cpu : usr=1.03%, sys=3.57%, ctx=15986, majf=0, minf=1 00:10:20.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 issued rwts: total=15977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.788 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69023: Mon Jul 15 22:21:34 2024 00:10:20.788 read: IOPS=4093, BW=16.0MiB/s (16.8MB/s)(50.0MiB/3129msec) 00:10:20.788 slat (usec): min=7, max=13519, avg=11.57, stdev=152.61 00:10:20.788 clat (usec): min=55, max=3276, avg=231.67, stdev=44.83 00:10:20.788 lat (usec): min=126, max=13767, avg=243.24, stdev=159.65 00:10:20.788 clat percentiles (usec): 00:10:20.788 | 1.00th=[ 139], 5.00th=[ 184], 10.00th=[ 212], 20.00th=[ 221], 00:10:20.788 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:20.788 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:10:20.788 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 404], 99.95th=[ 594], 00:10:20.788 | 99.99th=[ 1991] 00:10:20.788 bw ( KiB/s): min=16032, max=16720, per=27.10%, avg=16406.50, stdev=225.94, samples=6 00:10:20.788 iops : min= 4008, max= 4180, avg=4101.50, stdev=56.43, samples=6 00:10:20.788 lat (usec) : 100=0.01%, 250=85.81%, 500=14.10%, 750=0.04%, 1000=0.01% 00:10:20.788 lat (msec) : 2=0.02%, 4=0.01% 00:10:20.788 cpu : usr=1.05%, sys=3.52%, ctx=12815, majf=0, minf=1 00:10:20.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 issued rwts: total=12810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.788 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69024: Mon Jul 15 22:21:34 2024 00:10:20.788 read: IOPS=4046, BW=15.8MiB/s (16.6MB/s)(45.9MiB/2905msec) 00:10:20.788 slat (nsec): min=7540, max=97204, avg=9697.44, stdev=3455.11 00:10:20.788 clat (usec): min=123, max=1760, avg=236.34, stdev=28.78 00:10:20.788 lat (usec): min=136, max=1768, avg=246.04, stdev=28.71 00:10:20.788 clat percentiles (usec): 00:10:20.788 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:10:20.788 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:10:20.788 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:10:20.788 | 99.00th=[ 289], 99.50th=[ 347], 99.90th=[ 586], 99.95th=[ 660], 00:10:20.788 | 99.99th=[ 840] 00:10:20.788 bw ( KiB/s): min=15648, max=16640, per=26.81%, avg=16230.20, stdev=398.42, samples=5 00:10:20.788 iops : min= 3912, max= 4160, avg=4057.40, stdev=99.64, samples=5 00:10:20.788 lat (usec) : 250=84.20%, 500=15.63%, 750=0.14%, 1000=0.02% 00:10:20.788 lat (msec) : 2=0.01% 00:10:20.788 cpu : usr=0.86%, sys=3.72%, ctx=11757, majf=0, minf=2 00:10:20.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.788 issued rwts: total=11756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:20.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.788 00:10:20.788 Run status group 0 (all jobs): 00:10:20.788 READ: bw=59.1MiB/s (62.0MB/s), 15.8MiB/s-17.4MiB/s (16.6MB/s-18.2MB/s), io=212MiB (222MB), run=2905-3587msec 00:10:20.788 00:10:20.788 Disk stats (read/write): 00:10:20.788 nvme0n1: ios=12401/0, merge=0/0, ticks=2946/0, in_queue=2946, util=94.45% 00:10:20.788 nvme0n2: ios=14331/0, merge=0/0, ticks=3188/0, in_queue=3188, util=95.01% 00:10:20.788 nvme0n3: ios=12733/0, merge=0/0, ticks=2951/0, in_queue=2951, util=96.05% 00:10:20.788 nvme0n4: ios=11581/0, merge=0/0, ticks=2753/0, in_queue=2753, util=96.79% 00:10:20.788 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.788 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:21.046 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.046 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:21.303 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.303 22:21:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:21.561 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.561 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:21.818 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.818 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68981 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:22.076 nvmf hotplug test: fio failed as expected 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as 
expected' 00:10:22.076 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:22.334 rmmod nvme_tcp 00:10:22.334 rmmod nvme_fabrics 00:10:22.334 rmmod nvme_keyring 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68605 ']' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68605 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68605 ']' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68605 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68605 00:10:22.334 killing process with pid 68605 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68605' 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68605 00:10:22.334 22:21:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68605 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:22.591 ************************************ 00:10:22.591 END TEST nvmf_fio_target 00:10:22.591 ************************************ 00:10:22.591 00:10:22.591 real 0m18.819s 00:10:22.591 user 1m9.277s 00:10:22.591 sys 0m10.991s 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.591 22:21:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.591 22:21:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.591 22:21:36 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.591 22:21:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.591 22:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.591 22:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 ************************************ 00:10:22.849 START TEST nvmf_bdevio 00:10:22.849 ************************************ 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.849 * Looking for test storage... 00:10:22.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.849 22:21:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.850 22:21:36 
nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:22.850 Cannot find device "nvmf_tgt_br" 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.850 Cannot find device "nvmf_tgt_br2" 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:22.850 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:10:23.150 Cannot find device "nvmf_tgt_br" 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:23.150 Cannot find device "nvmf_tgt_br2" 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.150 
22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:23.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:10:23.150 00:10:23.150 --- 10.0.0.2 ping statistics --- 00:10:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.150 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:23.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:23.150 00:10:23.150 --- 10.0.0.3 ping statistics --- 00:10:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.150 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:23.150 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:23.408 00:10:23.408 --- 10.0.0.1 ping statistics --- 00:10:23.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.408 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69292 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69292 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69292 ']' 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:23.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.408 22:21:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.408 [2024-07-15 22:21:36.862936] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:23.408 [2024-07-15 22:21:36.863008] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.408 [2024-07-15 22:21:36.998568] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.665 [2024-07-15 22:21:37.098734] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.665 [2024-07-15 22:21:37.098787] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.665 [2024-07-15 22:21:37.098797] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.666 [2024-07-15 22:21:37.098805] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.666 [2024-07-15 22:21:37.098812] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.666 [2024-07-15 22:21:37.098929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:23.666 [2024-07-15 22:21:37.099045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:23.666 [2024-07-15 22:21:37.099581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:23.666 [2024-07-15 22:21:37.099632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.666 [2024-07-15 22:21:37.142355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 [2024-07-15 22:21:37.799874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 Malloc0 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.232 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.490 [2024-07-15 22:21:37.878818] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:24.490 { 00:10:24.490 "params": { 00:10:24.490 "name": "Nvme$subsystem", 00:10:24.490 "trtype": "$TEST_TRANSPORT", 00:10:24.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.490 "adrfam": "ipv4", 00:10:24.490 "trsvcid": "$NVMF_PORT", 00:10:24.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.490 "hdgst": ${hdgst:-false}, 00:10:24.490 "ddgst": ${ddgst:-false} 00:10:24.490 }, 00:10:24.490 "method": "bdev_nvme_attach_controller" 00:10:24.490 } 00:10:24.490 EOF 00:10:24.490 )") 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:24.490 22:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:24.490 "params": { 00:10:24.490 "name": "Nvme1", 00:10:24.490 "trtype": "tcp", 00:10:24.490 "traddr": "10.0.0.2", 00:10:24.490 "adrfam": "ipv4", 00:10:24.490 "trsvcid": "4420", 00:10:24.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.490 "hdgst": false, 00:10:24.490 "ddgst": false 00:10:24.490 }, 00:10:24.490 "method": "bdev_nvme_attach_controller" 00:10:24.490 }' 00:10:24.490 [2024-07-15 22:21:37.939208] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
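The target side of this bdevio run is configured entirely through the rpc_cmd calls traced just above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420. A minimal stand-alone sketch using SPDK's scripts/rpc.py is shown below; it assumes nvmf_tgt is already running inside the nvmf_tgt_ns_spdk namespace and answering on the default /var/tmp/spdk.sock RPC socket, as in this job.

# Sketch: replay the target-side RPCs from the trace above by hand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420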
00:10:24.490 [2024-07-15 22:21:37.939310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69328 ] 00:10:24.490 [2024-07-15 22:21:38.103505] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.747 [2024-07-15 22:21:38.207692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.747 [2024-07-15 22:21:38.207810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.747 [2024-07-15 22:21:38.207810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.747 [2024-07-15 22:21:38.260457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.747 I/O targets: 00:10:24.747 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:24.747 00:10:24.747 00:10:24.747 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.747 http://cunit.sourceforge.net/ 00:10:24.747 00:10:24.747 00:10:24.747 Suite: bdevio tests on: Nvme1n1 00:10:24.747 Test: blockdev write read block ...passed 00:10:24.747 Test: blockdev write zeroes read block ...passed 00:10:24.747 Test: blockdev write zeroes read no split ...passed 00:10:25.005 Test: blockdev write zeroes read split ...passed 00:10:25.005 Test: blockdev write zeroes read split partial ...passed 00:10:25.005 Test: blockdev reset ...[2024-07-15 22:21:38.398563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:25.005 [2024-07-15 22:21:38.398683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417730 (9): Bad file descriptor 00:10:25.005 [2024-07-15 22:21:38.416303] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
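On the initiator side, gen_nvmf_target_json (its output is printed a few lines up) hands bdevio a single bdev_nvme_attach_controller entry over /dev/fd/62. The sketch below writes the same configuration to a regular file instead; the outer "subsystems"/"config" wrapper is an assumption about what gen_nvmf_target_json emits around the printed params block, so treat it as illustrative rather than exact.

# Sketch: attach Nvme1 over NVMe/TCP and run the bdevio suite against it.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json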
00:10:25.005 passed 00:10:25.005 Test: blockdev write read 8 blocks ...passed 00:10:25.005 Test: blockdev write read size > 128k ...passed 00:10:25.005 Test: blockdev write read invalid size ...passed 00:10:25.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:25.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:25.005 Test: blockdev write read max offset ...passed 00:10:25.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:25.005 Test: blockdev writev readv 8 blocks ...passed 00:10:25.005 Test: blockdev writev readv 30 x 1block ...passed 00:10:25.005 Test: blockdev writev readv block ...passed 00:10:25.005 Test: blockdev writev readv size > 128k ...passed 00:10:25.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:25.005 Test: blockdev comparev and writev ...[2024-07-15 22:21:38.423621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.423671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.423688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.423699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.424101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.424120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.424134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.424144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.424431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.424450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.424465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.424475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.424963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.424985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.425000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:25.005 [2024-07-15 22:21:38.425009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:25.005 passed 00:10:25.005 Test: blockdev nvme passthru rw ...passed 00:10:25.005 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:21:38.425864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.005 [2024-07-15 22:21:38.425892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:25.005 [2024-07-15 22:21:38.425979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.006 [2024-07-15 22:21:38.425992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:25.006 [2024-07-15 22:21:38.426074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.006 [2024-07-15 22:21:38.426085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:25.006 [2024-07-15 22:21:38.426174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:25.006 [2024-07-15 22:21:38.426186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:25.006 passed 00:10:25.006 Test: blockdev nvme admin passthru ...passed 00:10:25.006 Test: blockdev copy ...passed 00:10:25.006 00:10:25.006 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.006 suites 1 1 n/a 0 0 00:10:25.006 tests 23 23 23 0 0 00:10:25.006 asserts 152 152 152 0 n/a 00:10:25.006 00:10:25.006 Elapsed time = 0.146 seconds 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.006 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.263 rmmod nvme_tcp 00:10:25.263 rmmod nvme_fabrics 00:10:25.263 rmmod nvme_keyring 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69292 ']' 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69292 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69292 ']' 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69292 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69292 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:25.263 killing process with pid 69292 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69292' 00:10:25.263 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69292 00:10:25.264 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69292 00:10:25.520 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.521 22:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.521 22:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:25.521 00:10:25.521 real 0m2.820s 00:10:25.521 user 0m8.776s 00:10:25.521 sys 0m0.854s 00:10:25.521 22:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.521 22:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.521 ************************************ 00:10:25.521 END TEST nvmf_bdevio 00:10:25.521 ************************************ 00:10:25.521 22:21:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.521 22:21:39 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:25.521 22:21:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.521 22:21:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.521 22:21:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.521 ************************************ 00:10:25.521 START TEST nvmf_auth_target 00:10:25.521 ************************************ 00:10:25.521 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:25.778 * Looking for test storage... 
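Before auth.sh gets going below, nvmftestfini tears the bdevio environment back down in the order traced above: sync, unload the kernel initiator modules, kill the nvmf_tgt reactor process, drop the target namespace, and flush the initiator veth address. A condensed sketch follows, with the pid and interface names taken from this run; the namespace deletion is an assumption about what the redirected _remove_spdk_ns call does.

# Sketch of the nvmftestfini teardown traced above (69292 is this run's nvmf_tgt pid).
sync
modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, as in the rmmod lines above
modprobe -v -r nvme-fabrics
kill 69292                          # killprocess: reactor_3 is not sudo, a plain kill is enough
ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns; takes nvmf_tgt_if/if2 with it
ip -4 addr flush nvmf_init_if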
00:10:25.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.778 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:25.779 Cannot find device "nvmf_tgt_br" 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.779 Cannot find device "nvmf_tgt_br2" 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:25.779 Cannot find device "nvmf_tgt_br" 00:10:25.779 
22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:25.779 Cannot find device "nvmf_tgt_br2" 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:25.779 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:26.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:26.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:26.036 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:26.292 22:21:39 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:26.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:26.292 00:10:26.292 --- 10.0.0.2 ping statistics --- 00:10:26.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.292 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:26.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:26.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:26.292 00:10:26.292 --- 10.0.0.3 ping statistics --- 00:10:26.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.292 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:26.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:26.292 00:10:26.292 --- 10.0.0.1 ping statistics --- 00:10:26.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.292 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.292 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69492 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69492 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69492 ']' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.293 22:21:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.293 22:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69530 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e09d533f2df366597ac3b4eaa20f4c990a293c543f669bad 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pgd 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e09d533f2df366597ac3b4eaa20f4c990a293c543f669bad 0 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e09d533f2df366597ac3b4eaa20f4c990a293c543f669bad 0 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e09d533f2df366597ac3b4eaa20f4c990a293c543f669bad 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pgd 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pgd 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.pgd 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=03b9d6d1ce0fcb1475f9f1aed4e45d4cd6e8734e8e0c466b840a3c5050410656 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1ty 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 03b9d6d1ce0fcb1475f9f1aed4e45d4cd6e8734e8e0c466b840a3c5050410656 3 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 03b9d6d1ce0fcb1475f9f1aed4e45d4cd6e8734e8e0c466b840a3c5050410656 3 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=03b9d6d1ce0fcb1475f9f1aed4e45d4cd6e8734e8e0c466b840a3c5050410656 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:27.224 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.480 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1ty 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1ty 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1ty 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=596cae2b7887301a5662a84280c2cba8 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.STR 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 596cae2b7887301a5662a84280c2cba8 1 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 596cae2b7887301a5662a84280c2cba8 1 
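gen_dhchap_key, exercised repeatedly in this stretch of the trace, pulls len/2 random bytes from /dev/urandom with xxd, wraps them as a DHHC-1 secret whose digest index comes from the map null=0, sha256=1, sha384=2, sha512=3, and drops the result mode 0600 into a mktemp'd /tmp/spdk.key-* file. The DHHC-1 encoding itself is done by the python one-liner the trace elides, so the helper in the sketch below is a hypothetical stand-in that does not reproduce the real encoding; only the surrounding flow for the sha256/32 case is shown.

# Rough sketch of gen_dhchap_key "sha256 32"; format_dhchap_key here is a
# hypothetical placeholder for the elided python helper, not the real encoder.
format_dhchap_key() { printf 'DHHC-1:<digest %s>:<wrapped %s>:\n' "$2" "$1"; }

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
digest=sha256 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)             # e.g. 596cae2b7887301a5662a84280c2cba8
file=$(mktemp -t "spdk.key-$digest.XXX")
format_dhchap_key "$key" "${digests[$digest]}" > "$file"    # redirect is an assumption about the helper
chmod 0600 "$file"
echo "$file"                                                # auth.sh stores this path in keys[]/ckeys[]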
00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=596cae2b7887301a5662a84280c2cba8 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.STR 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.STR 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.STR 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ee32be3820ecc1959b0ddb2810bf55222c7a478407c2e464 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FUf 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ee32be3820ecc1959b0ddb2810bf55222c7a478407c2e464 2 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ee32be3820ecc1959b0ddb2810bf55222c7a478407c2e464 2 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ee32be3820ecc1959b0ddb2810bf55222c7a478407c2e464 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FUf 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FUf 00:10:27.481 22:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.FUf 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:27.481 
22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=18157cbc5169fef5d5155b6addafefc661ea06127c68d52b 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.32e 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 18157cbc5169fef5d5155b6addafefc661ea06127c68d52b 2 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 18157cbc5169fef5d5155b6addafefc661ea06127c68d52b 2 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=18157cbc5169fef5d5155b6addafefc661ea06127c68d52b 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.32e 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.32e 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.32e 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9ee1293e3674c84fd70ac45558bb7a0 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UgK 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9ee1293e3674c84fd70ac45558bb7a0 1 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b9ee1293e3674c84fd70ac45558bb7a0 1 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9ee1293e3674c84fd70ac45558bb7a0 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:27.481 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UgK 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UgK 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.UgK 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9c65b6f98f6355348c6a1de16678f6f4292b82332a6714506520210ec33a41e 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zjO 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9c65b6f98f6355348c6a1de16678f6f4292b82332a6714506520210ec33a41e 3 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9c65b6f98f6355348c6a1de16678f6f4292b82332a6714506520210ec33a41e 3 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9c65b6f98f6355348c6a1de16678f6f4292b82332a6714506520210ec33a41e 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zjO 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zjO 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zjO 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69492 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69492 ']' 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
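The gen_dhchap_key calls traced above reduce to a short recipe: map the digest name to its DHHC-1 id (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes from /dev/urandom as a len-character hex string, wrap that string into a DHHC-1:<id>:<base64>: secret, and store it in a 0600 temp file. The sketch below reproduces that flow in standalone bash; the python wrapping step is an assumption about what the format_key / "python -" stage does (the trace only shows its inputs and the resulting DHHC-1 strings), so treat the CRC32/base64 detail as illustrative rather than the canonical nvmf/common.sh implementation.

# Standalone sketch of the key generation traced above (assumed reconstruction).
# digest id: 0=null, 1=sha256, 2=sha384, 3=sha512; len is the hex-character count.
gen_dhchap_key_sketch() {
  local digest_id=$1 len=$2 key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. len=48 -> 24 random bytes as 48 hex chars
  file=$(mktemp -t spdk.key-sketch.XXX)
  python3 - "$digest_id" "$key" > "$file" <<'PYEOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
# Assumption: the secret is the ASCII hex string plus a little-endian CRC32, base64-encoded.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
  chmod 0600 "$file"
  echo "$file"
}

# Example: a 48-hex-character sha384 key file, analogous to /tmp/spdk.key-sha384.FUf above.
# keyfile=$(gen_dhchap_key_sketch 2 48)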
00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.739 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69530 /var/tmp/host.sock 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69530 ']' 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.997 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pgd 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pgd 00:10:28.255 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pgd 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1ty ]] 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1ty 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1ty 00:10:28.512 22:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.1ty 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.STR 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.STR 00:10:28.512 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.STR 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.FUf ]] 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FUf 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FUf 00:10:28.770 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FUf 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.32e 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.32e 00:10:29.028 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.32e 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.UgK ]] 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UgK 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UgK 00:10:29.286 22:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UgK 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:29.544 
22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zjO 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zjO 00:10:29.544 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zjO 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:29.809 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.078 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.078 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.336 { 00:10:30.336 "cntlid": 1, 00:10:30.336 "qid": 0, 00:10:30.336 "state": "enabled", 00:10:30.336 "thread": "nvmf_tgt_poll_group_000", 00:10:30.336 "listen_address": { 00:10:30.336 "trtype": "TCP", 00:10:30.336 "adrfam": "IPv4", 00:10:30.336 "traddr": "10.0.0.2", 00:10:30.336 "trsvcid": "4420" 00:10:30.336 }, 00:10:30.336 "peer_address": { 00:10:30.336 "trtype": "TCP", 00:10:30.336 "adrfam": "IPv4", 00:10:30.336 "traddr": "10.0.0.1", 00:10:30.336 "trsvcid": "59272" 00:10:30.336 }, 00:10:30.336 "auth": { 00:10:30.336 "state": "completed", 00:10:30.336 "digest": "sha256", 00:10:30.336 "dhgroup": "null" 00:10:30.336 } 00:10:30.336 } 00:10:30.336 ]' 00:10:30.336 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.595 22:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.595 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.854 22:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.037 22:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.037 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.037 { 00:10:35.037 "cntlid": 3, 00:10:35.037 "qid": 0, 00:10:35.037 "state": "enabled", 00:10:35.037 "thread": "nvmf_tgt_poll_group_000", 00:10:35.037 "listen_address": { 00:10:35.037 "trtype": "TCP", 00:10:35.037 "adrfam": "IPv4", 00:10:35.037 "traddr": "10.0.0.2", 00:10:35.037 "trsvcid": "4420" 00:10:35.037 }, 00:10:35.037 "peer_address": { 00:10:35.037 "trtype": "TCP", 00:10:35.037 
"adrfam": "IPv4", 00:10:35.037 "traddr": "10.0.0.1", 00:10:35.037 "trsvcid": "59306" 00:10:35.037 }, 00:10:35.037 "auth": { 00:10:35.037 "state": "completed", 00:10:35.037 "digest": "sha256", 00:10:35.037 "dhgroup": "null" 00:10:35.037 } 00:10:35.037 } 00:10:35.037 ]' 00:10:35.037 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.294 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.552 22:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.120 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.379 22:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.638 00:10:36.638 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.638 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.638 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.897 { 00:10:36.897 "cntlid": 5, 00:10:36.897 "qid": 0, 00:10:36.897 "state": "enabled", 00:10:36.897 "thread": "nvmf_tgt_poll_group_000", 00:10:36.897 "listen_address": { 00:10:36.897 "trtype": "TCP", 00:10:36.897 "adrfam": "IPv4", 00:10:36.897 "traddr": "10.0.0.2", 00:10:36.897 "trsvcid": "4420" 00:10:36.897 }, 00:10:36.897 "peer_address": { 00:10:36.897 "trtype": "TCP", 00:10:36.897 "adrfam": "IPv4", 00:10:36.897 "traddr": "10.0.0.1", 00:10:36.897 "trsvcid": "59338" 00:10:36.897 }, 00:10:36.897 "auth": { 00:10:36.897 "state": "completed", 00:10:36.897 "digest": "sha256", 00:10:36.897 "dhgroup": "null" 00:10:36.897 } 00:10:36.897 } 00:10:36.897 ]' 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.897 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.156 22:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:37.723 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.028 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.285 00:10:38.285 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.285 22:21:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.285 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.543 { 00:10:38.543 "cntlid": 7, 00:10:38.543 "qid": 0, 00:10:38.543 "state": "enabled", 00:10:38.543 "thread": "nvmf_tgt_poll_group_000", 00:10:38.543 "listen_address": { 00:10:38.543 "trtype": "TCP", 00:10:38.543 "adrfam": "IPv4", 00:10:38.543 "traddr": "10.0.0.2", 00:10:38.543 "trsvcid": "4420" 00:10:38.543 }, 00:10:38.543 "peer_address": { 00:10:38.543 "trtype": "TCP", 00:10:38.543 "adrfam": "IPv4", 00:10:38.543 "traddr": "10.0.0.1", 00:10:38.543 "trsvcid": "46996" 00:10:38.543 }, 00:10:38.543 "auth": { 00:10:38.543 "state": "completed", 00:10:38.543 "digest": "sha256", 00:10:38.543 "dhgroup": "null" 00:10:38.543 } 00:10:38.543 } 00:10:38.543 ]' 00:10:38.543 22:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.543 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.801 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.366 22:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.624 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.881 00:10:39.881 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.881 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.881 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.138 { 00:10:40.138 "cntlid": 9, 00:10:40.138 "qid": 0, 00:10:40.138 "state": "enabled", 00:10:40.138 "thread": "nvmf_tgt_poll_group_000", 00:10:40.138 "listen_address": { 00:10:40.138 "trtype": "TCP", 00:10:40.138 "adrfam": "IPv4", 00:10:40.138 
"traddr": "10.0.0.2", 00:10:40.138 "trsvcid": "4420" 00:10:40.138 }, 00:10:40.138 "peer_address": { 00:10:40.138 "trtype": "TCP", 00:10:40.138 "adrfam": "IPv4", 00:10:40.138 "traddr": "10.0.0.1", 00:10:40.138 "trsvcid": "47038" 00:10:40.138 }, 00:10:40.138 "auth": { 00:10:40.138 "state": "completed", 00:10:40.138 "digest": "sha256", 00:10:40.138 "dhgroup": "ffdhe2048" 00:10:40.138 } 00:10:40.138 } 00:10:40.138 ]' 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:40.138 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.394 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.394 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.395 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.395 22:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:10:40.959 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.959 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:40.959 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.960 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.960 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.960 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.960 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.960 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.217 22:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.505 00:10:41.505 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.505 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.505 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.764 { 00:10:41.764 "cntlid": 11, 00:10:41.764 "qid": 0, 00:10:41.764 "state": "enabled", 00:10:41.764 "thread": "nvmf_tgt_poll_group_000", 00:10:41.764 "listen_address": { 00:10:41.764 "trtype": "TCP", 00:10:41.764 "adrfam": "IPv4", 00:10:41.764 "traddr": "10.0.0.2", 00:10:41.764 "trsvcid": "4420" 00:10:41.764 }, 00:10:41.764 "peer_address": { 00:10:41.764 "trtype": "TCP", 00:10:41.764 "adrfam": "IPv4", 00:10:41.764 "traddr": "10.0.0.1", 00:10:41.764 "trsvcid": "47084" 00:10:41.764 }, 00:10:41.764 "auth": { 00:10:41.764 "state": "completed", 00:10:41.764 "digest": "sha256", 00:10:41.764 "dhgroup": "ffdhe2048" 00:10:41.764 } 00:10:41.764 } 00:10:41.764 ]' 00:10:41.764 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.023 22:21:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.023 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.281 22:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.848 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.849 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.108 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.367 { 00:10:43.367 "cntlid": 13, 00:10:43.367 "qid": 0, 00:10:43.367 "state": "enabled", 00:10:43.367 "thread": "nvmf_tgt_poll_group_000", 00:10:43.367 "listen_address": { 00:10:43.367 "trtype": "TCP", 00:10:43.367 "adrfam": "IPv4", 00:10:43.367 "traddr": "10.0.0.2", 00:10:43.367 "trsvcid": "4420" 00:10:43.367 }, 00:10:43.367 "peer_address": { 00:10:43.367 "trtype": "TCP", 00:10:43.367 "adrfam": "IPv4", 00:10:43.367 "traddr": "10.0.0.1", 00:10:43.367 "trsvcid": "47116" 00:10:43.367 }, 00:10:43.367 "auth": { 00:10:43.367 "state": "completed", 00:10:43.367 "digest": "sha256", 00:10:43.367 "dhgroup": "ffdhe2048" 00:10:43.367 } 00:10:43.367 } 00:10:43.367 ]' 00:10:43.367 22:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.626 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 
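Stripped of the xtrace noise, each pass of the loop above performs one authenticated attach and one verification. The condensed sketch below shows a single iteration using the same RPCs that appear in the trace (rpc.py against the target's default /var/tmp/spdk.sock and the host's /var/tmp/host.sock); it assumes key1/ckey1 were already registered on both sides with keyring_file_add_key, and it omits the jq digest/dhgroup checks and all error handling.

# One iteration of the connect/authenticate/verify cycle traced above (sketch only).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target side, default socket /var/tmp/spdk.sock
hostrpc="$rpc -s /var/tmp/host.sock"              # host (initiator) side
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

# Restrict the initiator to one digest/dhgroup combination for this pass.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Authorize the host on the subsystem with the key pair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach with the same pair from the host side, then confirm the qpair authenticated.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# Tear down before the next digest/dhgroup/key combination.
$hostrpc bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"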
00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.193 22:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.451 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.709 00:10:44.709 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.709 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.709 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.967 { 00:10:44.967 "cntlid": 15, 00:10:44.967 "qid": 0, 
00:10:44.967 "state": "enabled", 00:10:44.967 "thread": "nvmf_tgt_poll_group_000", 00:10:44.967 "listen_address": { 00:10:44.967 "trtype": "TCP", 00:10:44.967 "adrfam": "IPv4", 00:10:44.967 "traddr": "10.0.0.2", 00:10:44.967 "trsvcid": "4420" 00:10:44.967 }, 00:10:44.967 "peer_address": { 00:10:44.967 "trtype": "TCP", 00:10:44.967 "adrfam": "IPv4", 00:10:44.967 "traddr": "10.0.0.1", 00:10:44.967 "trsvcid": "47138" 00:10:44.967 }, 00:10:44.967 "auth": { 00:10:44.967 "state": "completed", 00:10:44.967 "digest": "sha256", 00:10:44.967 "dhgroup": "ffdhe2048" 00:10:44.967 } 00:10:44.967 } 00:10:44.967 ]' 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.967 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.225 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.225 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.225 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.225 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.225 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.483 22:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.049 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.308 00:10:46.566 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.566 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.566 22:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.566 { 00:10:46.566 "cntlid": 17, 00:10:46.566 "qid": 0, 00:10:46.566 "state": "enabled", 00:10:46.566 "thread": "nvmf_tgt_poll_group_000", 00:10:46.566 "listen_address": { 00:10:46.566 "trtype": "TCP", 00:10:46.566 "adrfam": "IPv4", 00:10:46.566 "traddr": "10.0.0.2", 00:10:46.566 "trsvcid": "4420" 00:10:46.566 }, 00:10:46.566 "peer_address": { 00:10:46.566 "trtype": "TCP", 00:10:46.566 "adrfam": "IPv4", 00:10:46.566 "traddr": "10.0.0.1", 00:10:46.566 "trsvcid": "47162" 00:10:46.566 }, 00:10:46.566 "auth": { 00:10:46.566 "state": "completed", 00:10:46.566 "digest": "sha256", 00:10:46.566 "dhgroup": "ffdhe3072" 00:10:46.566 } 00:10:46.566 } 00:10:46.566 ]' 00:10:46.566 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.828 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.099 22:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.700 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.700 
22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.965 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.225 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.484 { 00:10:48.484 "cntlid": 19, 00:10:48.484 "qid": 0, 00:10:48.484 "state": "enabled", 00:10:48.484 "thread": "nvmf_tgt_poll_group_000", 00:10:48.484 "listen_address": { 00:10:48.484 "trtype": "TCP", 00:10:48.484 "adrfam": "IPv4", 00:10:48.484 "traddr": "10.0.0.2", 00:10:48.484 "trsvcid": "4420" 00:10:48.484 }, 00:10:48.484 "peer_address": { 00:10:48.484 "trtype": "TCP", 00:10:48.484 "adrfam": "IPv4", 00:10:48.484 "traddr": "10.0.0.1", 00:10:48.484 "trsvcid": "47176" 00:10:48.484 }, 00:10:48.484 "auth": { 00:10:48.484 "state": "completed", 00:10:48.484 "digest": "sha256", 00:10:48.484 "dhgroup": "ffdhe3072" 00:10:48.484 } 00:10:48.484 } 00:10:48.484 ]' 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:48.484 22:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.484 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.484 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.484 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.742 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
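(For readability: every "connect_authenticate <digest> <dhgroup> <keyid>" pass that target/auth.sh repeats in the trace above boils down to the sketch below. It is reconstructed from the xtrace output rather than copied from the script: hostrpc is the suite's rpc.py wrapper for the bdev_nvme host application on /var/tmp/host.sock exactly as shown in the trace, rpc_cmd is assumed to be the same wrapper aimed at the nvmf target's default RPC socket, and key1/ckey1 name DH-HMAC-CHAP keys registered earlier in the run, outside this excerpt.)

    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # assumption: target on the default RPC socket

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

    # host side: restrict the initiator to the digest/dhgroup combination under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side: allow the host NQN on cnode0 with this key pair
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach the controller with the same keys, forcing in-band authentication
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # confirm the resulting qpair negotiated what was requested
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed

    # tear down before the next key/dhgroup combination
    hostrpc bdev_nvme_detach_controller nvme0

The outer loops visible at target/auth.sh@92-@93 simply repeat this sequence for every dhgroup in the dhgroups array and every key index.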
00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.309 22:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.576 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.835 00:10:49.835 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.835 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.835 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.092 { 00:10:50.092 "cntlid": 21, 00:10:50.092 "qid": 0, 00:10:50.092 "state": "enabled", 00:10:50.092 "thread": "nvmf_tgt_poll_group_000", 00:10:50.092 "listen_address": { 00:10:50.092 "trtype": "TCP", 00:10:50.092 "adrfam": "IPv4", 00:10:50.092 "traddr": "10.0.0.2", 00:10:50.092 "trsvcid": "4420" 00:10:50.092 }, 00:10:50.092 "peer_address": { 00:10:50.092 "trtype": "TCP", 00:10:50.092 "adrfam": "IPv4", 00:10:50.092 "traddr": "10.0.0.1", 00:10:50.092 "trsvcid": "53746" 00:10:50.092 }, 00:10:50.092 "auth": { 00:10:50.092 "state": "completed", 00:10:50.092 "digest": "sha256", 00:10:50.092 "dhgroup": "ffdhe3072" 00:10:50.092 } 00:10:50.092 } 00:10:50.092 ]' 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.092 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.351 22:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.918 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:51.177 22:22:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.177 22:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.435 00:10:51.435 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.435 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.435 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.694 { 00:10:51.694 "cntlid": 23, 00:10:51.694 "qid": 0, 00:10:51.694 "state": "enabled", 00:10:51.694 "thread": "nvmf_tgt_poll_group_000", 00:10:51.694 "listen_address": { 00:10:51.694 "trtype": "TCP", 00:10:51.694 "adrfam": "IPv4", 00:10:51.694 "traddr": "10.0.0.2", 00:10:51.694 "trsvcid": "4420" 00:10:51.694 }, 00:10:51.694 "peer_address": { 00:10:51.694 "trtype": "TCP", 00:10:51.694 "adrfam": "IPv4", 00:10:51.694 "traddr": "10.0.0.1", 00:10:51.694 "trsvcid": "53776" 00:10:51.694 }, 00:10:51.694 "auth": { 00:10:51.694 "state": "completed", 00:10:51.694 "digest": "sha256", 00:10:51.694 "dhgroup": "ffdhe3072" 00:10:51.694 } 00:10:51.694 } 00:10:51.694 ]' 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.694 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:51.952 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.952 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.952 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.952 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.952 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.211 22:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:52.779 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.038 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.039 22:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.039 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.039 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.297 00:10:53.298 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.298 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.298 22:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.556 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.556 { 00:10:53.556 "cntlid": 25, 00:10:53.557 "qid": 0, 00:10:53.557 "state": "enabled", 00:10:53.557 "thread": "nvmf_tgt_poll_group_000", 00:10:53.557 "listen_address": { 00:10:53.557 "trtype": "TCP", 00:10:53.557 "adrfam": "IPv4", 00:10:53.557 "traddr": "10.0.0.2", 00:10:53.557 "trsvcid": "4420" 00:10:53.557 }, 00:10:53.557 "peer_address": { 00:10:53.557 "trtype": "TCP", 00:10:53.557 "adrfam": "IPv4", 00:10:53.557 "traddr": "10.0.0.1", 00:10:53.557 "trsvcid": "53812" 00:10:53.557 }, 00:10:53.557 "auth": { 00:10:53.557 "state": "completed", 00:10:53.557 "digest": "sha256", 00:10:53.557 "dhgroup": "ffdhe4096" 00:10:53.557 } 00:10:53.557 } 00:10:53.557 ]' 00:10:53.557 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.557 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.557 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.557 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.557 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.816 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.816 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.816 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.816 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret 
DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:10:54.384 22:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.384 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:54.384 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.384 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.643 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.210 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
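(After the SPDK host controller is detached, the same key material is also pushed through the Linux kernel initiator; that is what the nvme connect / nvme disconnect pairs in the trace are doing. A rough equivalent, with placeholders standing in for the full DHHC-1 secrets printed above and rpc_cmd as in the previous sketch, is:)

    # the literal DHHC-1:<hash-id>:<base64>: strings from the trace for the key index under test
    hostsecret='DHHC-1:00:<host key material>:'     # placeholder, not a real secret
    ctrlsecret='DHHC-1:03:<ctrlr key material>:'    # placeholder, not a real secret

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc \
        --hostid 37374fe9-a847-4b40-94af-b766955abedc \
        --dhchap-secret "$hostsecret" --dhchap-ctrl-secret "$ctrlsecret"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

Note that the key3 passes in this trace supply only --dhchap-key key3 on the target and only --dhchap-secret on the initiator: the controller-key half, and with it bidirectional authentication, is optional per key index.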
00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.210 { 00:10:55.210 "cntlid": 27, 00:10:55.210 "qid": 0, 00:10:55.210 "state": "enabled", 00:10:55.210 "thread": "nvmf_tgt_poll_group_000", 00:10:55.210 "listen_address": { 00:10:55.210 "trtype": "TCP", 00:10:55.210 "adrfam": "IPv4", 00:10:55.210 "traddr": "10.0.0.2", 00:10:55.210 "trsvcid": "4420" 00:10:55.210 }, 00:10:55.210 "peer_address": { 00:10:55.210 "trtype": "TCP", 00:10:55.210 "adrfam": "IPv4", 00:10:55.210 "traddr": "10.0.0.1", 00:10:55.210 "trsvcid": "53850" 00:10:55.210 }, 00:10:55.210 "auth": { 00:10:55.210 "state": "completed", 00:10:55.210 "digest": "sha256", 00:10:55.210 "dhgroup": "ffdhe4096" 00:10:55.210 } 00:10:55.210 } 00:10:55.210 ]' 00:10:55.210 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.468 22:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.726 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.292 22:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.550 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.808 00:10:56.808 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.808 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.808 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.067 { 00:10:57.067 "cntlid": 29, 00:10:57.067 "qid": 0, 00:10:57.067 "state": "enabled", 00:10:57.067 "thread": "nvmf_tgt_poll_group_000", 00:10:57.067 "listen_address": { 00:10:57.067 "trtype": "TCP", 00:10:57.067 "adrfam": "IPv4", 00:10:57.067 "traddr": "10.0.0.2", 00:10:57.067 "trsvcid": "4420" 00:10:57.067 }, 00:10:57.067 "peer_address": { 00:10:57.067 "trtype": "TCP", 00:10:57.067 "adrfam": "IPv4", 00:10:57.067 "traddr": "10.0.0.1", 00:10:57.067 "trsvcid": "53890" 00:10:57.067 }, 00:10:57.067 "auth": { 00:10:57.067 "state": "completed", 00:10:57.067 "digest": "sha256", 00:10:57.067 "dhgroup": 
"ffdhe4096" 00:10:57.067 } 00:10:57.067 } 00:10:57.067 ]' 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.067 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.325 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:57.325 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.325 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.325 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.325 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.583 22:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.150 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.409 22:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.667 00:10:58.667 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.667 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.667 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.925 { 00:10:58.925 "cntlid": 31, 00:10:58.925 "qid": 0, 00:10:58.925 "state": "enabled", 00:10:58.925 "thread": "nvmf_tgt_poll_group_000", 00:10:58.925 "listen_address": { 00:10:58.925 "trtype": "TCP", 00:10:58.925 "adrfam": "IPv4", 00:10:58.925 "traddr": "10.0.0.2", 00:10:58.925 "trsvcid": "4420" 00:10:58.925 }, 00:10:58.925 "peer_address": { 00:10:58.925 "trtype": "TCP", 00:10:58.925 "adrfam": "IPv4", 00:10:58.925 "traddr": "10.0.0.1", 00:10:58.925 "trsvcid": "40302" 00:10:58.925 }, 00:10:58.925 "auth": { 00:10:58.925 "state": "completed", 00:10:58.925 "digest": "sha256", 00:10:58.925 "dhgroup": "ffdhe4096" 00:10:58.925 } 00:10:58.925 } 00:10:58.925 ]' 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.925 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.183 22:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 
37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:59.830 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.086 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.343 00:11:00.343 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.343 22:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.343 22:22:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.600 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.601 { 00:11:00.601 "cntlid": 33, 00:11:00.601 "qid": 0, 00:11:00.601 "state": "enabled", 00:11:00.601 "thread": "nvmf_tgt_poll_group_000", 00:11:00.601 "listen_address": { 00:11:00.601 "trtype": "TCP", 00:11:00.601 "adrfam": "IPv4", 00:11:00.601 "traddr": "10.0.0.2", 00:11:00.601 "trsvcid": "4420" 00:11:00.601 }, 00:11:00.601 "peer_address": { 00:11:00.601 "trtype": "TCP", 00:11:00.601 "adrfam": "IPv4", 00:11:00.601 "traddr": "10.0.0.1", 00:11:00.601 "trsvcid": "40326" 00:11:00.601 }, 00:11:00.601 "auth": { 00:11:00.601 "state": "completed", 00:11:00.601 "digest": "sha256", 00:11:00.601 "dhgroup": "ffdhe6144" 00:11:00.601 } 00:11:00.601 } 00:11:00.601 ]' 00:11:00.601 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.878 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.136 22:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.701 
22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.701 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.268 00:11:02.268 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.268 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.268 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.584 { 00:11:02.584 "cntlid": 35, 00:11:02.584 "qid": 0, 00:11:02.584 "state": "enabled", 00:11:02.584 "thread": "nvmf_tgt_poll_group_000", 00:11:02.584 "listen_address": { 00:11:02.584 "trtype": "TCP", 00:11:02.584 "adrfam": "IPv4", 00:11:02.584 "traddr": "10.0.0.2", 00:11:02.584 "trsvcid": "4420" 00:11:02.584 }, 00:11:02.584 "peer_address": { 00:11:02.584 "trtype": "TCP", 00:11:02.584 
"adrfam": "IPv4", 00:11:02.584 "traddr": "10.0.0.1", 00:11:02.584 "trsvcid": "40348" 00:11:02.584 }, 00:11:02.584 "auth": { 00:11:02.584 "state": "completed", 00:11:02.584 "digest": "sha256", 00:11:02.584 "dhgroup": "ffdhe6144" 00:11:02.584 } 00:11:02.584 } 00:11:02.584 ]' 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.584 22:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.584 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.584 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.584 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.584 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.584 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.842 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.408 22:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:03.675 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.676 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.933 00:11:03.933 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.933 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.933 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.191 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.191 { 00:11:04.191 "cntlid": 37, 00:11:04.191 "qid": 0, 00:11:04.191 "state": "enabled", 00:11:04.191 "thread": "nvmf_tgt_poll_group_000", 00:11:04.191 "listen_address": { 00:11:04.191 "trtype": "TCP", 00:11:04.191 "adrfam": "IPv4", 00:11:04.191 "traddr": "10.0.0.2", 00:11:04.191 "trsvcid": "4420" 00:11:04.191 }, 00:11:04.191 "peer_address": { 00:11:04.191 "trtype": "TCP", 00:11:04.191 "adrfam": "IPv4", 00:11:04.191 "traddr": "10.0.0.1", 00:11:04.191 "trsvcid": "40382" 00:11:04.191 }, 00:11:04.191 "auth": { 00:11:04.192 "state": "completed", 00:11:04.192 "digest": "sha256", 00:11:04.192 "dhgroup": "ffdhe6144" 00:11:04.192 } 00:11:04.192 } 00:11:04.192 ]' 00:11:04.192 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.192 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.192 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.192 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:04.192 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.449 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.449 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.449 22:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.449 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.413 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.414 22:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:06.068 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.068 
22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.068 22:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.325 { 00:11:06.325 "cntlid": 39, 00:11:06.325 "qid": 0, 00:11:06.325 "state": "enabled", 00:11:06.325 "thread": "nvmf_tgt_poll_group_000", 00:11:06.325 "listen_address": { 00:11:06.325 "trtype": "TCP", 00:11:06.325 "adrfam": "IPv4", 00:11:06.325 "traddr": "10.0.0.2", 00:11:06.325 "trsvcid": "4420" 00:11:06.325 }, 00:11:06.325 "peer_address": { 00:11:06.325 "trtype": "TCP", 00:11:06.325 "adrfam": "IPv4", 00:11:06.325 "traddr": "10.0.0.1", 00:11:06.325 "trsvcid": "40404" 00:11:06.325 }, 00:11:06.325 "auth": { 00:11:06.325 "state": "completed", 00:11:06.325 "digest": "sha256", 00:11:06.325 "dhgroup": "ffdhe6144" 00:11:06.325 } 00:11:06.325 } 00:11:06.325 ]' 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.325 22:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.583 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.152 22:22:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:07.152 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.412 22:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.980 00:11:07.980 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.980 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.980 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.239 { 00:11:08.239 "cntlid": 41, 00:11:08.239 "qid": 0, 00:11:08.239 "state": "enabled", 00:11:08.239 "thread": "nvmf_tgt_poll_group_000", 00:11:08.239 "listen_address": { 00:11:08.239 "trtype": 
"TCP", 00:11:08.239 "adrfam": "IPv4", 00:11:08.239 "traddr": "10.0.0.2", 00:11:08.239 "trsvcid": "4420" 00:11:08.239 }, 00:11:08.239 "peer_address": { 00:11:08.239 "trtype": "TCP", 00:11:08.239 "adrfam": "IPv4", 00:11:08.239 "traddr": "10.0.0.1", 00:11:08.239 "trsvcid": "40434" 00:11:08.239 }, 00:11:08.239 "auth": { 00:11:08.239 "state": "completed", 00:11:08.239 "digest": "sha256", 00:11:08.239 "dhgroup": "ffdhe8192" 00:11:08.239 } 00:11:08.239 } 00:11:08.239 ]' 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.239 22:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.498 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:09.065 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:09.324 22:22:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.324 22:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.890 00:11:09.890 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.890 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.890 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.148 { 00:11:10.148 "cntlid": 43, 00:11:10.148 "qid": 0, 00:11:10.148 "state": "enabled", 00:11:10.148 "thread": "nvmf_tgt_poll_group_000", 00:11:10.148 "listen_address": { 00:11:10.148 "trtype": "TCP", 00:11:10.148 "adrfam": "IPv4", 00:11:10.148 "traddr": "10.0.0.2", 00:11:10.148 "trsvcid": "4420" 00:11:10.148 }, 00:11:10.148 "peer_address": { 00:11:10.148 "trtype": "TCP", 00:11:10.148 "adrfam": "IPv4", 00:11:10.148 "traddr": "10.0.0.1", 00:11:10.148 "trsvcid": "59814" 00:11:10.148 }, 00:11:10.148 "auth": { 00:11:10.148 "state": "completed", 00:11:10.148 "digest": "sha256", 00:11:10.148 "dhgroup": "ffdhe8192" 00:11:10.148 } 00:11:10.148 } 00:11:10.148 ]' 00:11:10.148 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.405 22:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.664 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:11.233 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.492 22:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.060 00:11:12.060 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.060 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.060 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.318 { 00:11:12.318 "cntlid": 45, 00:11:12.318 "qid": 0, 00:11:12.318 "state": "enabled", 00:11:12.318 "thread": "nvmf_tgt_poll_group_000", 00:11:12.318 "listen_address": { 00:11:12.318 "trtype": "TCP", 00:11:12.318 "adrfam": "IPv4", 00:11:12.318 "traddr": "10.0.0.2", 00:11:12.318 "trsvcid": "4420" 00:11:12.318 }, 00:11:12.318 "peer_address": { 00:11:12.318 "trtype": "TCP", 00:11:12.318 "adrfam": "IPv4", 00:11:12.318 "traddr": "10.0.0.1", 00:11:12.318 "trsvcid": "59834" 00:11:12.318 }, 00:11:12.318 "auth": { 00:11:12.318 "state": "completed", 00:11:12.318 "digest": "sha256", 00:11:12.318 "dhgroup": "ffdhe8192" 00:11:12.318 } 00:11:12.318 } 00:11:12.318 ]' 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.318 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.319 22:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.577 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.142 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.399 22:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.963 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.221 22:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.222 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:14.222 { 00:11:14.222 "cntlid": 47, 00:11:14.222 "qid": 0, 00:11:14.222 "state": "enabled", 00:11:14.222 "thread": "nvmf_tgt_poll_group_000", 00:11:14.222 "listen_address": { 00:11:14.222 "trtype": "TCP", 00:11:14.222 "adrfam": "IPv4", 00:11:14.222 "traddr": "10.0.0.2", 00:11:14.222 "trsvcid": "4420" 00:11:14.222 }, 00:11:14.222 "peer_address": { 00:11:14.222 "trtype": "TCP", 00:11:14.222 "adrfam": "IPv4", 00:11:14.222 "traddr": "10.0.0.1", 00:11:14.222 "trsvcid": "59874" 00:11:14.222 }, 00:11:14.222 "auth": { 00:11:14.222 "state": "completed", 00:11:14.222 "digest": "sha256", 00:11:14.222 "dhgroup": "ffdhe8192" 00:11:14.222 } 00:11:14.222 } 00:11:14.222 ]' 00:11:14.222 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.479 22:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.738 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:15.304 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
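At this point the trace has finished the sha256/ffdhe8192 passes and switched to sha384 with the "null" DH group; the same RPC sequence now repeats for every digest/dhgroup/key combination. Written out as plain rpc.py calls, one such pass looks roughly like the sketch below. The paths, NQNs and key names (key0/ckey0) are taken from the trace itself; the target-side RPC socket and the keyring entries behind key0/ckey0 are set up earlier in auth.sh and are only assumed here, and the nvme-cli connect/disconnect leg that each pass ends with is sketched separately at the end of this section.

# One connect/authenticate pass, as exercised repeatedly in the trace above
# (sketch only: key0/ckey0 must already exist in the target keyring, and the
# target-side rpc.py socket is assumed to be the default one hidden behind
# the trace's rpc_cmd wrapper).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

# Limit the initiator (bdev_nvme) side to a single digest/dhgroup combination.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# Allow the host on the target subsystem and bind the DH-HMAC-CHAP keys to it.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side; authentication runs during the connect.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller exists and inspect what was negotiated on the qpair
# (digest, dhgroup, auth state), as the jq checks in the trace do.
$rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

# Tear down before the next digest/dhgroup/key combination.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn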
00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.562 22:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.563 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.563 22:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.821 00:11:15.821 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.821 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.821 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.079 { 00:11:16.079 "cntlid": 49, 00:11:16.079 "qid": 0, 00:11:16.079 "state": "enabled", 00:11:16.079 "thread": "nvmf_tgt_poll_group_000", 00:11:16.079 "listen_address": { 00:11:16.079 "trtype": "TCP", 00:11:16.079 "adrfam": "IPv4", 00:11:16.079 "traddr": "10.0.0.2", 00:11:16.079 "trsvcid": "4420" 00:11:16.079 }, 00:11:16.079 "peer_address": { 00:11:16.079 "trtype": "TCP", 00:11:16.079 "adrfam": "IPv4", 00:11:16.079 "traddr": "10.0.0.1", 00:11:16.079 "trsvcid": "59896" 00:11:16.079 }, 00:11:16.079 "auth": { 00:11:16.079 "state": "completed", 00:11:16.079 "digest": "sha384", 00:11:16.079 "dhgroup": "null" 00:11:16.079 } 00:11:16.079 } 00:11:16.079 ]' 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.079 22:22:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.079 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.337 22:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:16.904 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.162 22:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.421 00:11:17.421 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.421 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.421 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.680 { 00:11:17.680 "cntlid": 51, 00:11:17.680 "qid": 0, 00:11:17.680 "state": "enabled", 00:11:17.680 "thread": "nvmf_tgt_poll_group_000", 00:11:17.680 "listen_address": { 00:11:17.680 "trtype": "TCP", 00:11:17.680 "adrfam": "IPv4", 00:11:17.680 "traddr": "10.0.0.2", 00:11:17.680 "trsvcid": "4420" 00:11:17.680 }, 00:11:17.680 "peer_address": { 00:11:17.680 "trtype": "TCP", 00:11:17.680 "adrfam": "IPv4", 00:11:17.680 "traddr": "10.0.0.1", 00:11:17.680 "trsvcid": "59918" 00:11:17.680 }, 00:11:17.680 "auth": { 00:11:17.680 "state": "completed", 00:11:17.680 "digest": "sha384", 00:11:17.680 "dhgroup": "null" 00:11:17.680 } 00:11:17.680 } 00:11:17.680 ]' 00:11:17.680 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.938 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.197 22:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:18.764 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.073 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.341 00:11:19.341 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.341 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.341 22:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.598 { 00:11:19.598 "cntlid": 53, 00:11:19.598 "qid": 0, 00:11:19.598 "state": "enabled", 00:11:19.598 "thread": "nvmf_tgt_poll_group_000", 00:11:19.598 "listen_address": { 00:11:19.598 "trtype": "TCP", 00:11:19.598 "adrfam": "IPv4", 00:11:19.598 "traddr": "10.0.0.2", 00:11:19.598 "trsvcid": "4420" 00:11:19.598 }, 00:11:19.598 "peer_address": { 00:11:19.598 "trtype": "TCP", 00:11:19.598 "adrfam": "IPv4", 00:11:19.598 "traddr": "10.0.0.1", 00:11:19.598 "trsvcid": "36222" 00:11:19.598 }, 00:11:19.598 "auth": { 00:11:19.598 "state": "completed", 00:11:19.598 "digest": "sha384", 00:11:19.598 "dhgroup": "null" 00:11:19.598 } 00:11:19.598 } 00:11:19.598 ]' 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.598 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.854 22:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:20.418 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.675 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:21.255 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.255 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.513 { 00:11:21.513 "cntlid": 55, 00:11:21.513 "qid": 0, 00:11:21.513 "state": "enabled", 00:11:21.513 "thread": "nvmf_tgt_poll_group_000", 00:11:21.513 "listen_address": { 00:11:21.513 "trtype": "TCP", 00:11:21.513 "adrfam": "IPv4", 00:11:21.513 "traddr": "10.0.0.2", 00:11:21.513 "trsvcid": "4420" 00:11:21.513 }, 00:11:21.513 "peer_address": { 00:11:21.513 "trtype": "TCP", 00:11:21.513 "adrfam": "IPv4", 00:11:21.513 "traddr": "10.0.0.1", 00:11:21.513 "trsvcid": "36256" 00:11:21.513 }, 00:11:21.513 "auth": { 00:11:21.513 "state": "completed", 00:11:21.513 "digest": "sha384", 00:11:21.513 "dhgroup": "null" 00:11:21.513 } 00:11:21.513 } 00:11:21.513 ]' 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.513 22:22:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:21.513 22:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.513 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.513 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.513 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.772 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:22.335 22:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:22.591 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.592 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.849 00:11:22.849 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.849 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.849 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.106 { 00:11:23.106 "cntlid": 57, 00:11:23.106 "qid": 0, 00:11:23.106 "state": "enabled", 00:11:23.106 "thread": "nvmf_tgt_poll_group_000", 00:11:23.106 "listen_address": { 00:11:23.106 "trtype": "TCP", 00:11:23.106 "adrfam": "IPv4", 00:11:23.106 "traddr": "10.0.0.2", 00:11:23.106 "trsvcid": "4420" 00:11:23.106 }, 00:11:23.106 "peer_address": { 00:11:23.106 "trtype": "TCP", 00:11:23.106 "adrfam": "IPv4", 00:11:23.106 "traddr": "10.0.0.1", 00:11:23.106 "trsvcid": "36278" 00:11:23.106 }, 00:11:23.106 "auth": { 00:11:23.106 "state": "completed", 00:11:23.106 "digest": "sha384", 00:11:23.106 "dhgroup": "ffdhe2048" 00:11:23.106 } 00:11:23.106 } 00:11:23.106 ]' 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.106 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.107 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.107 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.364 22:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret 
DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:23.930 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.188 22:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.447 00:11:24.447 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.447 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.447 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
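Each repeated block in this trace runs the same connect_authenticate cycle for one digest/dhgroup/key combination: the target is told which DH-HMAC-CHAP key the host NQN may use, the bdev_nvme initiator (driven over the secondary RPC socket /var/tmp/host.sock) attaches with the matching key, the resulting qpair is checked for auth state "completed" with the expected digest and dhgroup, and everything is torn down before the next combination. A minimal shell sketch of one iteration, using only commands visible in the trace; the 10.0.0.2:4420 listener, the /var/tmp/host.sock host socket, and the key0/ckey0 key names come from the trace, while the $hostnqn / $hostid values (the nqn.2014-08.org.nvmexpress:uuid:37374fe9-... identity) and the keys themselves are assumed to have been registered earlier in the script; target-side calls are shown as plain rpc.py against the target's default RPC socket, which is an assumption about what rpc_cmd resolves to here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: allow this host NQN to authenticate with key0 (controller key ckey0).
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: pin the digest/dhgroup under test, then attach with the same key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify on the target that the new qpair finished DH-HMAC-CHAP ("completed" is what the checks above assert).
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The nvme connect / nvme disconnect pair that follows each RPC cycle in the trace repeats the same handshake from the kernel initiator, passing the secrets by value (--dhchap-secret DHHC-1:xx:... and --dhchap-ctrl-secret DHHC-1:xx:...) instead of by key name.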
00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.705 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.705 { 00:11:24.705 "cntlid": 59, 00:11:24.705 "qid": 0, 00:11:24.705 "state": "enabled", 00:11:24.705 "thread": "nvmf_tgt_poll_group_000", 00:11:24.705 "listen_address": { 00:11:24.705 "trtype": "TCP", 00:11:24.705 "adrfam": "IPv4", 00:11:24.705 "traddr": "10.0.0.2", 00:11:24.705 "trsvcid": "4420" 00:11:24.705 }, 00:11:24.705 "peer_address": { 00:11:24.705 "trtype": "TCP", 00:11:24.705 "adrfam": "IPv4", 00:11:24.705 "traddr": "10.0.0.1", 00:11:24.705 "trsvcid": "36314" 00:11:24.705 }, 00:11:24.705 "auth": { 00:11:24.706 "state": "completed", 00:11:24.706 "digest": "sha384", 00:11:24.706 "dhgroup": "ffdhe2048" 00:11:24.706 } 00:11:24.706 } 00:11:24.706 ]' 00:11:24.706 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.964 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.222 22:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:25.789 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.048 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.307 00:11:26.307 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.307 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.307 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.565 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.566 22:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.566 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.566 22:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.566 { 00:11:26.566 "cntlid": 61, 00:11:26.566 "qid": 0, 00:11:26.566 "state": "enabled", 00:11:26.566 "thread": "nvmf_tgt_poll_group_000", 00:11:26.566 "listen_address": { 00:11:26.566 "trtype": "TCP", 00:11:26.566 "adrfam": "IPv4", 00:11:26.566 "traddr": "10.0.0.2", 00:11:26.566 "trsvcid": "4420" 00:11:26.566 }, 00:11:26.566 "peer_address": { 00:11:26.566 "trtype": "TCP", 00:11:26.566 "adrfam": "IPv4", 00:11:26.566 "traddr": "10.0.0.1", 00:11:26.566 "trsvcid": "36332" 00:11:26.566 }, 00:11:26.566 "auth": { 00:11:26.566 "state": "completed", 00:11:26.566 "digest": "sha384", 00:11:26.566 "dhgroup": 
"ffdhe2048" 00:11:26.566 } 00:11:26.566 } 00:11:26.566 ]' 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.566 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.823 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.391 22:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.649 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.906 00:11:27.906 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.906 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.906 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.165 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.165 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.165 22:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.166 { 00:11:28.166 "cntlid": 63, 00:11:28.166 "qid": 0, 00:11:28.166 "state": "enabled", 00:11:28.166 "thread": "nvmf_tgt_poll_group_000", 00:11:28.166 "listen_address": { 00:11:28.166 "trtype": "TCP", 00:11:28.166 "adrfam": "IPv4", 00:11:28.166 "traddr": "10.0.0.2", 00:11:28.166 "trsvcid": "4420" 00:11:28.166 }, 00:11:28.166 "peer_address": { 00:11:28.166 "trtype": "TCP", 00:11:28.166 "adrfam": "IPv4", 00:11:28.166 "traddr": "10.0.0.1", 00:11:28.166 "trsvcid": "36364" 00:11:28.166 }, 00:11:28.166 "auth": { 00:11:28.166 "state": "completed", 00:11:28.166 "digest": "sha384", 00:11:28.166 "dhgroup": "ffdhe2048" 00:11:28.166 } 00:11:28.166 } 00:11:28.166 ]' 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.166 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.423 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.423 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.423 22:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.423 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 
37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.362 22:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.622 00:11:29.622 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.622 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.622 22:22:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.880 { 00:11:29.880 "cntlid": 65, 00:11:29.880 "qid": 0, 00:11:29.880 "state": "enabled", 00:11:29.880 "thread": "nvmf_tgt_poll_group_000", 00:11:29.880 "listen_address": { 00:11:29.880 "trtype": "TCP", 00:11:29.880 "adrfam": "IPv4", 00:11:29.880 "traddr": "10.0.0.2", 00:11:29.880 "trsvcid": "4420" 00:11:29.880 }, 00:11:29.880 "peer_address": { 00:11:29.880 "trtype": "TCP", 00:11:29.880 "adrfam": "IPv4", 00:11:29.880 "traddr": "10.0.0.1", 00:11:29.880 "trsvcid": "57698" 00:11:29.880 }, 00:11:29.880 "auth": { 00:11:29.880 "state": "completed", 00:11:29.880 "digest": "sha384", 00:11:29.880 "dhgroup": "ffdhe3072" 00:11:29.880 } 00:11:29.880 } 00:11:29.880 ]' 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.880 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.138 22:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.073 
22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.073 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.332 00:11:31.332 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.332 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.332 22:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.590 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.591 { 00:11:31.591 "cntlid": 67, 00:11:31.591 "qid": 0, 00:11:31.591 "state": "enabled", 00:11:31.591 "thread": "nvmf_tgt_poll_group_000", 00:11:31.591 "listen_address": { 00:11:31.591 "trtype": "TCP", 00:11:31.591 "adrfam": "IPv4", 00:11:31.591 "traddr": "10.0.0.2", 00:11:31.591 "trsvcid": "4420" 00:11:31.591 }, 00:11:31.591 "peer_address": { 00:11:31.591 "trtype": "TCP", 00:11:31.591 
"adrfam": "IPv4", 00:11:31.591 "traddr": "10.0.0.1", 00:11:31.591 "trsvcid": "57724" 00:11:31.591 }, 00:11:31.591 "auth": { 00:11:31.591 "state": "completed", 00:11:31.591 "digest": "sha384", 00:11:31.591 "dhgroup": "ffdhe3072" 00:11:31.591 } 00:11:31.591 } 00:11:31.591 ]' 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:31.591 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.849 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.849 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.849 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.106 22:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.699 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.965 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.223 { 00:11:33.223 "cntlid": 69, 00:11:33.223 "qid": 0, 00:11:33.223 "state": "enabled", 00:11:33.223 "thread": "nvmf_tgt_poll_group_000", 00:11:33.223 "listen_address": { 00:11:33.223 "trtype": "TCP", 00:11:33.223 "adrfam": "IPv4", 00:11:33.223 "traddr": "10.0.0.2", 00:11:33.223 "trsvcid": "4420" 00:11:33.223 }, 00:11:33.223 "peer_address": { 00:11:33.223 "trtype": "TCP", 00:11:33.223 "adrfam": "IPv4", 00:11:33.223 "traddr": "10.0.0.1", 00:11:33.223 "trsvcid": "57736" 00:11:33.223 }, 00:11:33.223 "auth": { 00:11:33.223 "state": "completed", 00:11:33.223 "digest": "sha384", 00:11:33.223 "dhgroup": "ffdhe3072" 00:11:33.223 } 00:11:33.223 } 00:11:33.223 ]' 00:11:33.223 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.481 22:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.739 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.305 22:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.563 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.820 00:11:34.820 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.820 
22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.820 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.077 { 00:11:35.077 "cntlid": 71, 00:11:35.077 "qid": 0, 00:11:35.077 "state": "enabled", 00:11:35.077 "thread": "nvmf_tgt_poll_group_000", 00:11:35.077 "listen_address": { 00:11:35.077 "trtype": "TCP", 00:11:35.077 "adrfam": "IPv4", 00:11:35.077 "traddr": "10.0.0.2", 00:11:35.077 "trsvcid": "4420" 00:11:35.077 }, 00:11:35.077 "peer_address": { 00:11:35.077 "trtype": "TCP", 00:11:35.077 "adrfam": "IPv4", 00:11:35.077 "traddr": "10.0.0.1", 00:11:35.077 "trsvcid": "57756" 00:11:35.077 }, 00:11:35.077 "auth": { 00:11:35.077 "state": "completed", 00:11:35.077 "digest": "sha384", 00:11:35.077 "dhgroup": "ffdhe3072" 00:11:35.077 } 00:11:35.077 } 00:11:35.077 ]' 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:35.077 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.335 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.335 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.335 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.593 22:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.158 22:22:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:36.158 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.417 22:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.418 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.418 22:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.677 00:11:36.677 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.677 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.677 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.936 { 00:11:36.936 "cntlid": 73, 00:11:36.936 "qid": 0, 00:11:36.936 "state": "enabled", 00:11:36.936 "thread": "nvmf_tgt_poll_group_000", 00:11:36.936 "listen_address": { 00:11:36.936 "trtype": 
"TCP", 00:11:36.936 "adrfam": "IPv4", 00:11:36.936 "traddr": "10.0.0.2", 00:11:36.936 "trsvcid": "4420" 00:11:36.936 }, 00:11:36.936 "peer_address": { 00:11:36.936 "trtype": "TCP", 00:11:36.936 "adrfam": "IPv4", 00:11:36.936 "traddr": "10.0.0.1", 00:11:36.936 "trsvcid": "57788" 00:11:36.936 }, 00:11:36.936 "auth": { 00:11:36.936 "state": "completed", 00:11:36.936 "digest": "sha384", 00:11:36.936 "dhgroup": "ffdhe4096" 00:11:36.936 } 00:11:36.936 } 00:11:36.936 ]' 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.936 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.195 22:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:37.763 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:38.043 22:22:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.043 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.302 00:11:38.302 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.302 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.302 22:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.561 { 00:11:38.561 "cntlid": 75, 00:11:38.561 "qid": 0, 00:11:38.561 "state": "enabled", 00:11:38.561 "thread": "nvmf_tgt_poll_group_000", 00:11:38.561 "listen_address": { 00:11:38.561 "trtype": "TCP", 00:11:38.561 "adrfam": "IPv4", 00:11:38.561 "traddr": "10.0.0.2", 00:11:38.561 "trsvcid": "4420" 00:11:38.561 }, 00:11:38.561 "peer_address": { 00:11:38.561 "trtype": "TCP", 00:11:38.561 "adrfam": "IPv4", 00:11:38.561 "traddr": "10.0.0.1", 00:11:38.561 "trsvcid": "56090" 00:11:38.561 }, 00:11:38.561 "auth": { 00:11:38.561 "state": "completed", 00:11:38.561 "digest": "sha384", 00:11:38.561 "dhgroup": "ffdhe4096" 00:11:38.561 } 00:11:38.561 } 00:11:38.561 ]' 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.561 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.820 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:38.820 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.820 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.820 22:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.754 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.012 00:11:40.012 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.012 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.012 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.270 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.270 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.270 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.270 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.270 22:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.271 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.271 { 00:11:40.271 "cntlid": 77, 00:11:40.271 "qid": 0, 00:11:40.271 "state": "enabled", 00:11:40.271 "thread": "nvmf_tgt_poll_group_000", 00:11:40.271 "listen_address": { 00:11:40.271 "trtype": "TCP", 00:11:40.271 "adrfam": "IPv4", 00:11:40.271 "traddr": "10.0.0.2", 00:11:40.271 "trsvcid": "4420" 00:11:40.271 }, 00:11:40.271 "peer_address": { 00:11:40.271 "trtype": "TCP", 00:11:40.271 "adrfam": "IPv4", 00:11:40.271 "traddr": "10.0.0.1", 00:11:40.271 "trsvcid": "56108" 00:11:40.271 }, 00:11:40.271 "auth": { 00:11:40.271 "state": "completed", 00:11:40.271 "digest": "sha384", 00:11:40.271 "dhgroup": "ffdhe4096" 00:11:40.271 } 00:11:40.271 } 00:11:40.271 ]' 00:11:40.271 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.271 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.271 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.528 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.528 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.528 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.528 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.528 22:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.792 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.359 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.616 22:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.616 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.616 22:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.872 00:11:41.872 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.872 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.872 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:42.129 { 00:11:42.129 "cntlid": 79, 00:11:42.129 "qid": 0, 00:11:42.129 "state": "enabled", 00:11:42.129 "thread": "nvmf_tgt_poll_group_000", 00:11:42.129 "listen_address": { 00:11:42.129 "trtype": "TCP", 00:11:42.129 "adrfam": "IPv4", 00:11:42.129 "traddr": "10.0.0.2", 00:11:42.129 "trsvcid": "4420" 00:11:42.129 }, 00:11:42.129 "peer_address": { 00:11:42.129 "trtype": "TCP", 00:11:42.129 "adrfam": "IPv4", 00:11:42.129 "traddr": "10.0.0.1", 00:11:42.129 "trsvcid": "56132" 00:11:42.129 }, 00:11:42.129 "auth": { 00:11:42.129 "state": "completed", 00:11:42.129 "digest": "sha384", 00:11:42.129 "dhgroup": "ffdhe4096" 00:11:42.129 } 00:11:42.129 } 00:11:42.129 ]' 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.129 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.386 22:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:42.949 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.949 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:42.950 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
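The entries around this point are one pass of the test's inner loop: for the current digest/dhgroup/keyid combination (here sha384 / ffdhe6144 / key0), target/auth.sh restricts the host-side DH-HMAC-CHAP options, authorizes the host NQN on the target with the matching key pair, and attaches a controller through the host application before checking the result. A minimal sketch of that sequence, reusing only RPCs and flags that appear verbatim in this log; rpc_cmd is the autotest helper for the target application (assumed here to call scripts/rpc.py against the target's own RPC socket), scripts/rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and <host-nqn> stands for the nqn.2014-08.org.nvmexpress:uuid:37374fe9-... value used throughout:

# host side (wrapped by hostrpc in the log): limit digests/dhgroups for this pass
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# target side: allow the host with DH-HMAC-CHAP key pair 0
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller that must authenticate with the same keys
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0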
00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.207 22:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.505 00:11:43.505 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.505 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.505 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.776 { 00:11:43.776 "cntlid": 81, 00:11:43.776 "qid": 0, 00:11:43.776 "state": "enabled", 00:11:43.776 "thread": "nvmf_tgt_poll_group_000", 00:11:43.776 "listen_address": { 00:11:43.776 "trtype": "TCP", 00:11:43.776 "adrfam": "IPv4", 00:11:43.776 "traddr": "10.0.0.2", 00:11:43.776 "trsvcid": "4420" 00:11:43.776 }, 00:11:43.776 "peer_address": { 00:11:43.776 "trtype": "TCP", 00:11:43.776 "adrfam": "IPv4", 00:11:43.776 "traddr": "10.0.0.1", 00:11:43.776 "trsvcid": "56156" 00:11:43.776 }, 00:11:43.776 "auth": { 00:11:43.776 "state": "completed", 00:11:43.776 "digest": "sha384", 00:11:43.776 "dhgroup": "ffdhe6144" 00:11:43.776 } 00:11:43.776 } 00:11:43.776 ]' 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:43.776 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.036 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.036 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.036 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.036 22:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:44.603 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.603 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:44.603 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.603 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.861 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.429 00:11:45.429 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.429 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.429 22:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.687 { 00:11:45.687 "cntlid": 83, 00:11:45.687 "qid": 0, 00:11:45.687 "state": "enabled", 00:11:45.687 "thread": "nvmf_tgt_poll_group_000", 00:11:45.687 "listen_address": { 00:11:45.687 "trtype": "TCP", 00:11:45.687 "adrfam": "IPv4", 00:11:45.687 "traddr": "10.0.0.2", 00:11:45.687 "trsvcid": "4420" 00:11:45.687 }, 00:11:45.687 "peer_address": { 00:11:45.687 "trtype": "TCP", 00:11:45.687 "adrfam": "IPv4", 00:11:45.687 "traddr": "10.0.0.1", 00:11:45.687 "trsvcid": "56180" 00:11:45.687 }, 00:11:45.687 "auth": { 00:11:45.687 "state": "completed", 00:11:45.687 "digest": "sha384", 00:11:45.687 "dhgroup": "ffdhe6144" 00:11:45.687 } 00:11:45.687 } 00:11:45.687 ]' 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.687 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.946 22:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:46.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:46.584 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.843 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.410 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.410 22:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.410 22:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.410 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.410 { 00:11:47.410 "cntlid": 85, 00:11:47.411 "qid": 0, 00:11:47.411 "state": "enabled", 00:11:47.411 "thread": "nvmf_tgt_poll_group_000", 00:11:47.411 "listen_address": { 00:11:47.411 "trtype": "TCP", 00:11:47.411 "adrfam": "IPv4", 00:11:47.411 "traddr": "10.0.0.2", 00:11:47.411 "trsvcid": "4420" 00:11:47.411 }, 00:11:47.411 "peer_address": { 00:11:47.411 "trtype": "TCP", 00:11:47.411 "adrfam": "IPv4", 00:11:47.411 "traddr": "10.0.0.1", 00:11:47.411 "trsvcid": "56208" 00:11:47.411 }, 00:11:47.411 "auth": { 00:11:47.411 "state": "completed", 00:11:47.411 "digest": "sha384", 00:11:47.411 "dhgroup": "ffdhe6144" 00:11:47.411 } 00:11:47.411 } 00:11:47.411 ]' 00:11:47.411 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.669 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.928 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:48.495 22:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:48.495 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:48.754 22:23:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.754 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.013 00:11:49.013 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.013 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.013 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.293 { 00:11:49.293 "cntlid": 87, 00:11:49.293 "qid": 0, 00:11:49.293 "state": "enabled", 00:11:49.293 "thread": "nvmf_tgt_poll_group_000", 00:11:49.293 "listen_address": { 00:11:49.293 "trtype": "TCP", 00:11:49.293 "adrfam": "IPv4", 00:11:49.293 "traddr": "10.0.0.2", 00:11:49.293 "trsvcid": "4420" 00:11:49.293 }, 00:11:49.293 "peer_address": { 00:11:49.293 "trtype": "TCP", 00:11:49.293 "adrfam": "IPv4", 00:11:49.293 "traddr": "10.0.0.1", 00:11:49.293 "trsvcid": "58290" 00:11:49.293 }, 00:11:49.293 "auth": { 00:11:49.293 "state": "completed", 00:11:49.293 "digest": "sha384", 00:11:49.293 "dhgroup": "ffdhe6144" 00:11:49.293 } 00:11:49.293 } 00:11:49.293 ]' 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:11:49.293 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.552 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:49.552 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.552 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.552 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.552 22:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.810 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.376 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.377 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.377 22:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.635 22:23:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.635 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.202 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.202 { 00:11:51.202 "cntlid": 89, 00:11:51.202 "qid": 0, 00:11:51.202 "state": "enabled", 00:11:51.202 "thread": "nvmf_tgt_poll_group_000", 00:11:51.202 "listen_address": { 00:11:51.202 "trtype": "TCP", 00:11:51.202 "adrfam": "IPv4", 00:11:51.202 "traddr": "10.0.0.2", 00:11:51.202 "trsvcid": "4420" 00:11:51.202 }, 00:11:51.202 "peer_address": { 00:11:51.202 "trtype": "TCP", 00:11:51.202 "adrfam": "IPv4", 00:11:51.202 "traddr": "10.0.0.1", 00:11:51.202 "trsvcid": "58324" 00:11:51.202 }, 00:11:51.202 "auth": { 00:11:51.202 "state": "completed", 00:11:51.202 "digest": "sha384", 00:11:51.202 "dhgroup": "ffdhe8192" 00:11:51.202 } 00:11:51.202 } 00:11:51.202 ]' 00:11:51.202 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.461 22:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.719 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret 
DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:52.292 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.560 22:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.128 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
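After each attach, the log repeats the same verification: bdev_nvme_get_controllers on the host socket must report nvme0, and nvmf_subsystem_get_qpairs on the target must return a qpair whose auth block matches the digest and dhgroup configured for the pass, with state "completed". A sketch of those checks using the jq filters visible in the log, with expected values for the sha384 / ffdhe8192 pass being verified here (rpc_cmd and the abbreviated rpc.py path as in the earlier note; qpairs.json is just a scratch file for illustration):

# the host application sees the attached controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# the target reports the negotiated authentication parameters
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
jq -r '.[0].auth.digest'  qpairs.json    # expect: sha384
jq -r '.[0].auth.dhgroup' qpairs.json    # expect: ffdhe8192
jq -r '.[0].auth.state'   qpairs.json    # expect: completed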
00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.128 22:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.387 { 00:11:53.387 "cntlid": 91, 00:11:53.387 "qid": 0, 00:11:53.387 "state": "enabled", 00:11:53.387 "thread": "nvmf_tgt_poll_group_000", 00:11:53.387 "listen_address": { 00:11:53.387 "trtype": "TCP", 00:11:53.387 "adrfam": "IPv4", 00:11:53.387 "traddr": "10.0.0.2", 00:11:53.387 "trsvcid": "4420" 00:11:53.387 }, 00:11:53.387 "peer_address": { 00:11:53.387 "trtype": "TCP", 00:11:53.387 "adrfam": "IPv4", 00:11:53.387 "traddr": "10.0.0.1", 00:11:53.387 "trsvcid": "58354" 00:11:53.387 }, 00:11:53.387 "auth": { 00:11:53.387 "state": "completed", 00:11:53.387 "digest": "sha384", 00:11:53.387 "dhgroup": "ffdhe8192" 00:11:53.387 } 00:11:53.387 } 00:11:53.387 ]' 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.387 22:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.646 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:11:54.215 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.474 22:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.042 00:11:55.042 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.042 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.042 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.298 { 00:11:55.298 "cntlid": 93, 00:11:55.298 "qid": 0, 00:11:55.298 "state": "enabled", 00:11:55.298 "thread": "nvmf_tgt_poll_group_000", 00:11:55.298 "listen_address": { 00:11:55.298 "trtype": "TCP", 00:11:55.298 "adrfam": "IPv4", 00:11:55.298 "traddr": "10.0.0.2", 00:11:55.298 "trsvcid": "4420" 00:11:55.298 }, 00:11:55.298 "peer_address": { 00:11:55.298 "trtype": "TCP", 00:11:55.298 "adrfam": "IPv4", 00:11:55.298 "traddr": "10.0.0.1", 00:11:55.298 "trsvcid": "58382" 00:11:55.298 }, 00:11:55.298 
"auth": { 00:11:55.298 "state": "completed", 00:11:55.298 "digest": "sha384", 00:11:55.298 "dhgroup": "ffdhe8192" 00:11:55.298 } 00:11:55.298 } 00:11:55.298 ]' 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.298 22:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.563 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:56.142 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.400 22:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.966 00:11:56.966 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.966 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.966 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.225 { 00:11:57.225 "cntlid": 95, 00:11:57.225 "qid": 0, 00:11:57.225 "state": "enabled", 00:11:57.225 "thread": "nvmf_tgt_poll_group_000", 00:11:57.225 "listen_address": { 00:11:57.225 "trtype": "TCP", 00:11:57.225 "adrfam": "IPv4", 00:11:57.225 "traddr": "10.0.0.2", 00:11:57.225 "trsvcid": "4420" 00:11:57.225 }, 00:11:57.225 "peer_address": { 00:11:57.225 "trtype": "TCP", 00:11:57.225 "adrfam": "IPv4", 00:11:57.225 "traddr": "10.0.0.1", 00:11:57.225 "trsvcid": "58418" 00:11:57.225 }, 00:11:57.225 "auth": { 00:11:57.225 "state": "completed", 00:11:57.225 "digest": "sha384", 00:11:57.225 "dhgroup": "ffdhe8192" 00:11:57.225 } 00:11:57.225 } 00:11:57.225 ]' 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.225 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.484 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.484 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.484 22:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.484 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.050 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.308 22:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.566 00:11:58.566 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
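At this point the outer loop has moved on to the sha512 digest and restarted the dhgroup list from "null"; the per-key sequence is otherwise unchanged. The log also shows that, once the SPDK host-side checks pass and the controller is detached, each key is additionally exercised through the kernel initiator with nvme-cli, passing the secrets directly, before the host entry is removed again. A sketch of that step with the flags exactly as they appear in this log (the DHHC-1 strings are the test's generated secrets, shown here only as placeholders):

# connect through the kernel NVMe/TCP initiator using DH-HMAC-CHAP secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q <host-nqn> --hostid <host-uuid> \
  --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 controller secret>'
# tear down before the next digest/dhgroup/key combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>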
00:11:58.566 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.566 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.824 { 00:11:58.824 "cntlid": 97, 00:11:58.824 "qid": 0, 00:11:58.824 "state": "enabled", 00:11:58.824 "thread": "nvmf_tgt_poll_group_000", 00:11:58.824 "listen_address": { 00:11:58.824 "trtype": "TCP", 00:11:58.824 "adrfam": "IPv4", 00:11:58.824 "traddr": "10.0.0.2", 00:11:58.824 "trsvcid": "4420" 00:11:58.824 }, 00:11:58.824 "peer_address": { 00:11:58.824 "trtype": "TCP", 00:11:58.824 "adrfam": "IPv4", 00:11:58.824 "traddr": "10.0.0.1", 00:11:58.824 "trsvcid": "60908" 00:11:58.824 }, 00:11:58.824 "auth": { 00:11:58.824 "state": "completed", 00:11:58.824 "digest": "sha512", 00:11:58.824 "dhgroup": "null" 00:11:58.824 } 00:11:58.824 } 00:11:58.824 ]' 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.824 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.082 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:59.082 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.082 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.082 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.082 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.340 22:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:11:59.904 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.905 22:23:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:59.905 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.163 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.423 00:12:00.423 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.423 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.423 22:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.423 { 00:12:00.423 "cntlid": 99, 00:12:00.423 "qid": 0, 00:12:00.423 "state": "enabled", 00:12:00.423 "thread": "nvmf_tgt_poll_group_000", 00:12:00.423 "listen_address": { 00:12:00.423 "trtype": "TCP", 00:12:00.423 "adrfam": 
"IPv4", 00:12:00.423 "traddr": "10.0.0.2", 00:12:00.423 "trsvcid": "4420" 00:12:00.423 }, 00:12:00.423 "peer_address": { 00:12:00.423 "trtype": "TCP", 00:12:00.423 "adrfam": "IPv4", 00:12:00.423 "traddr": "10.0.0.1", 00:12:00.423 "trsvcid": "60930" 00:12:00.423 }, 00:12:00.423 "auth": { 00:12:00.423 "state": "completed", 00:12:00.423 "digest": "sha512", 00:12:00.423 "dhgroup": "null" 00:12:00.423 } 00:12:00.423 } 00:12:00.423 ]' 00:12:00.423 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.704 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.971 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:01.536 22:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.793 22:23:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.793 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.050 00:12:02.051 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.051 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.051 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.308 { 00:12:02.308 "cntlid": 101, 00:12:02.308 "qid": 0, 00:12:02.308 "state": "enabled", 00:12:02.308 "thread": "nvmf_tgt_poll_group_000", 00:12:02.308 "listen_address": { 00:12:02.308 "trtype": "TCP", 00:12:02.308 "adrfam": "IPv4", 00:12:02.308 "traddr": "10.0.0.2", 00:12:02.308 "trsvcid": "4420" 00:12:02.308 }, 00:12:02.308 "peer_address": { 00:12:02.308 "trtype": "TCP", 00:12:02.308 "adrfam": "IPv4", 00:12:02.308 "traddr": "10.0.0.1", 00:12:02.308 "trsvcid": "60970" 00:12:02.308 }, 00:12:02.308 "auth": { 00:12:02.308 "state": "completed", 00:12:02.308 "digest": "sha512", 00:12:02.308 "dhgroup": "null" 00:12:02.308 } 00:12:02.308 } 00:12:02.308 ]' 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
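After the RPC-level attach has been verified and detached, each iteration exercises the same credentials through the kernel initiator before removing the host again. Schematically (a sketch only; the literal DHHC-1 secrets are elided here, and the full strings used for each key are the ones visible in the nvme connect lines of this trace):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc \
      --hostid 37374fe9-a847-4b40-94af-b766955abedc \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # drop the host from the subsystem before the next key/dhgroup combination
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc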
00:12:02.308 22:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.568 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:03.134 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:03.392 22:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:03.651 00:12:03.651 22:23:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.651 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.651 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.909 { 00:12:03.909 "cntlid": 103, 00:12:03.909 "qid": 0, 00:12:03.909 "state": "enabled", 00:12:03.909 "thread": "nvmf_tgt_poll_group_000", 00:12:03.909 "listen_address": { 00:12:03.909 "trtype": "TCP", 00:12:03.909 "adrfam": "IPv4", 00:12:03.909 "traddr": "10.0.0.2", 00:12:03.909 "trsvcid": "4420" 00:12:03.909 }, 00:12:03.909 "peer_address": { 00:12:03.909 "trtype": "TCP", 00:12:03.909 "adrfam": "IPv4", 00:12:03.909 "traddr": "10.0.0.1", 00:12:03.909 "trsvcid": "32774" 00:12:03.909 }, 00:12:03.909 "auth": { 00:12:03.909 "state": "completed", 00:12:03.909 "digest": "sha512", 00:12:03.909 "dhgroup": "null" 00:12:03.909 } 00:12:03.909 } 00:12:03.909 ]' 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.909 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.167 22:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.732 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.990 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.248 00:12:05.248 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.248 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.248 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.507 { 00:12:05.507 "cntlid": 105, 00:12:05.507 "qid": 0, 00:12:05.507 "state": "enabled", 00:12:05.507 "thread": "nvmf_tgt_poll_group_000", 00:12:05.507 
"listen_address": { 00:12:05.507 "trtype": "TCP", 00:12:05.507 "adrfam": "IPv4", 00:12:05.507 "traddr": "10.0.0.2", 00:12:05.507 "trsvcid": "4420" 00:12:05.507 }, 00:12:05.507 "peer_address": { 00:12:05.507 "trtype": "TCP", 00:12:05.507 "adrfam": "IPv4", 00:12:05.507 "traddr": "10.0.0.1", 00:12:05.507 "trsvcid": "32784" 00:12:05.507 }, 00:12:05.507 "auth": { 00:12:05.507 "state": "completed", 00:12:05.507 "digest": "sha512", 00:12:05.507 "dhgroup": "ffdhe2048" 00:12:05.507 } 00:12:05.507 } 00:12:05.507 ]' 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.507 22:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.507 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.507 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.507 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.507 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.507 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.765 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.332 22:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.590 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.851 00:12:06.851 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.851 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.851 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.109 { 00:12:07.109 "cntlid": 107, 00:12:07.109 "qid": 0, 00:12:07.109 "state": "enabled", 00:12:07.109 "thread": "nvmf_tgt_poll_group_000", 00:12:07.109 "listen_address": { 00:12:07.109 "trtype": "TCP", 00:12:07.109 "adrfam": "IPv4", 00:12:07.109 "traddr": "10.0.0.2", 00:12:07.109 "trsvcid": "4420" 00:12:07.109 }, 00:12:07.109 "peer_address": { 00:12:07.109 "trtype": "TCP", 00:12:07.109 "adrfam": "IPv4", 00:12:07.109 "traddr": "10.0.0.1", 00:12:07.109 "trsvcid": "32792" 00:12:07.109 }, 00:12:07.109 "auth": { 00:12:07.109 "state": "completed", 00:12:07.109 "digest": "sha512", 00:12:07.109 "dhgroup": "ffdhe2048" 00:12:07.109 } 00:12:07.109 } 00:12:07.109 ]' 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.109 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.368 22:23:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.368 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.368 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.368 22:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:07.935 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.193 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.451 00:12:08.451 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.451 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.451 22:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.709 { 00:12:08.709 "cntlid": 109, 00:12:08.709 "qid": 0, 00:12:08.709 "state": "enabled", 00:12:08.709 "thread": "nvmf_tgt_poll_group_000", 00:12:08.709 "listen_address": { 00:12:08.709 "trtype": "TCP", 00:12:08.709 "adrfam": "IPv4", 00:12:08.709 "traddr": "10.0.0.2", 00:12:08.709 "trsvcid": "4420" 00:12:08.709 }, 00:12:08.709 "peer_address": { 00:12:08.709 "trtype": "TCP", 00:12:08.709 "adrfam": "IPv4", 00:12:08.709 "traddr": "10.0.0.1", 00:12:08.709 "trsvcid": "50782" 00:12:08.709 }, 00:12:08.709 "auth": { 00:12:08.709 "state": "completed", 00:12:08.709 "digest": "sha512", 00:12:08.709 "dhgroup": "ffdhe2048" 00:12:08.709 } 00:12:08.709 } 00:12:08.709 ]' 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.709 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.710 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.710 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:08.710 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.973 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.973 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.973 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.973 22:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.549 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.805 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.061 00:12:10.061 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.061 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.061 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:10.318 { 00:12:10.318 "cntlid": 111, 00:12:10.318 "qid": 0, 00:12:10.318 "state": "enabled", 00:12:10.318 "thread": "nvmf_tgt_poll_group_000", 00:12:10.318 "listen_address": { 00:12:10.318 "trtype": "TCP", 00:12:10.318 "adrfam": "IPv4", 00:12:10.318 "traddr": "10.0.0.2", 00:12:10.318 "trsvcid": "4420" 00:12:10.318 }, 00:12:10.318 "peer_address": { 00:12:10.318 "trtype": "TCP", 00:12:10.318 "adrfam": "IPv4", 00:12:10.318 "traddr": "10.0.0.1", 00:12:10.318 "trsvcid": "50810" 00:12:10.318 }, 00:12:10.318 "auth": { 00:12:10.318 "state": "completed", 00:12:10.318 "digest": "sha512", 00:12:10.318 "dhgroup": "ffdhe2048" 00:12:10.318 } 00:12:10.318 } 00:12:10.318 ]' 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.318 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.574 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.574 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.574 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.574 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.574 22:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.575 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.138 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.448 22:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.706 00:12:11.706 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.706 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.706 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.962 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.962 { 00:12:11.962 "cntlid": 113, 00:12:11.962 "qid": 0, 00:12:11.962 "state": "enabled", 00:12:11.962 "thread": "nvmf_tgt_poll_group_000", 00:12:11.962 "listen_address": { 00:12:11.962 "trtype": "TCP", 00:12:11.962 "adrfam": "IPv4", 00:12:11.962 "traddr": "10.0.0.2", 00:12:11.962 "trsvcid": "4420" 00:12:11.962 }, 00:12:11.962 "peer_address": { 00:12:11.962 "trtype": "TCP", 00:12:11.962 "adrfam": "IPv4", 00:12:11.962 "traddr": "10.0.0.1", 00:12:11.962 "trsvcid": "50844" 00:12:11.962 }, 00:12:11.962 "auth": { 00:12:11.962 "state": "completed", 00:12:11.963 "digest": "sha512", 00:12:11.963 "dhgroup": "ffdhe3072" 00:12:11.963 } 00:12:11.963 } 00:12:11.963 ]' 00:12:11.963 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.963 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.963 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.963 22:23:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.963 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.219 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.219 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.219 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.219 22:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:12.782 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.039 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.298 00:12:13.298 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.298 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.298 22:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.556 { 00:12:13.556 "cntlid": 115, 00:12:13.556 "qid": 0, 00:12:13.556 "state": "enabled", 00:12:13.556 "thread": "nvmf_tgt_poll_group_000", 00:12:13.556 "listen_address": { 00:12:13.556 "trtype": "TCP", 00:12:13.556 "adrfam": "IPv4", 00:12:13.556 "traddr": "10.0.0.2", 00:12:13.556 "trsvcid": "4420" 00:12:13.556 }, 00:12:13.556 "peer_address": { 00:12:13.556 "trtype": "TCP", 00:12:13.556 "adrfam": "IPv4", 00:12:13.556 "traddr": "10.0.0.1", 00:12:13.556 "trsvcid": "50880" 00:12:13.556 }, 00:12:13.556 "auth": { 00:12:13.556 "state": "completed", 00:12:13.556 "digest": "sha512", 00:12:13.556 "dhgroup": "ffdhe3072" 00:12:13.556 } 00:12:13.556 } 00:12:13.556 ]' 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:13.556 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.814 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.814 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.814 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.814 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:14.379 22:23:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.379 22:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.637 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.896 00:12:14.896 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.896 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.896 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.175 { 00:12:15.175 "cntlid": 117, 00:12:15.175 "qid": 0, 00:12:15.175 "state": "enabled", 00:12:15.175 "thread": "nvmf_tgt_poll_group_000", 00:12:15.175 "listen_address": { 00:12:15.175 "trtype": "TCP", 00:12:15.175 "adrfam": "IPv4", 00:12:15.175 "traddr": "10.0.0.2", 00:12:15.175 "trsvcid": "4420" 00:12:15.175 }, 00:12:15.175 "peer_address": { 00:12:15.175 "trtype": "TCP", 00:12:15.175 "adrfam": "IPv4", 00:12:15.175 "traddr": "10.0.0.1", 00:12:15.175 "trsvcid": "50902" 00:12:15.175 }, 00:12:15.175 "auth": { 00:12:15.175 "state": "completed", 00:12:15.175 "digest": "sha512", 00:12:15.175 "dhgroup": "ffdhe3072" 00:12:15.175 } 00:12:15.175 } 00:12:15.175 ]' 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.175 22:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.433 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.999 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.000 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.000 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.257 22:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.515 00:12:16.516 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.516 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.516 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.773 { 00:12:16.773 "cntlid": 119, 00:12:16.773 "qid": 0, 00:12:16.773 "state": "enabled", 00:12:16.773 "thread": "nvmf_tgt_poll_group_000", 00:12:16.773 "listen_address": { 00:12:16.773 "trtype": "TCP", 00:12:16.773 "adrfam": "IPv4", 00:12:16.773 "traddr": "10.0.0.2", 00:12:16.773 "trsvcid": "4420" 00:12:16.773 }, 00:12:16.773 "peer_address": { 00:12:16.773 "trtype": "TCP", 00:12:16.773 "adrfam": "IPv4", 00:12:16.773 "traddr": "10.0.0.1", 00:12:16.773 "trsvcid": "50930" 00:12:16.773 }, 00:12:16.773 "auth": { 00:12:16.773 "state": "completed", 00:12:16.773 "digest": "sha512", 00:12:16.773 "dhgroup": "ffdhe3072" 00:12:16.773 } 00:12:16.773 } 00:12:16.773 ]' 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.773 
22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.773 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.031 22:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.598 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.856 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.114 00:12:18.114 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.114 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.114 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.371 { 00:12:18.371 "cntlid": 121, 00:12:18.371 "qid": 0, 00:12:18.371 "state": "enabled", 00:12:18.371 "thread": "nvmf_tgt_poll_group_000", 00:12:18.371 "listen_address": { 00:12:18.371 "trtype": "TCP", 00:12:18.371 "adrfam": "IPv4", 00:12:18.371 "traddr": "10.0.0.2", 00:12:18.371 "trsvcid": "4420" 00:12:18.371 }, 00:12:18.371 "peer_address": { 00:12:18.371 "trtype": "TCP", 00:12:18.371 "adrfam": "IPv4", 00:12:18.371 "traddr": "10.0.0.1", 00:12:18.371 "trsvcid": "60048" 00:12:18.371 }, 00:12:18.371 "auth": { 00:12:18.371 "state": "completed", 00:12:18.371 "digest": "sha512", 00:12:18.371 "dhgroup": "ffdhe4096" 00:12:18.371 } 00:12:18.371 } 00:12:18.371 ]' 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.371 22:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.629 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.629 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.629 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.630 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret 
DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.196 22:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.454 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.713 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
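The loop traced above repeats one fixed host-side RPC sequence for every digest/dhgroup/key combination. As a reading aid, here is that sequence restated as a minimal shell sketch, using only commands and arguments that appear in the trace; the DHCHAP keys (key0-key3) and controller keys (ckey0-ckey3) are assumed to have been registered earlier in the script, the target-side RPC uses its default socket, and the host-side SPDK app listens on /var/tmp/host.sock.

  # One authentication round (sha512 + ffdhe4096, key pair 1), as a sketch
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

  # 1. Pin the host to a single digest/dhgroup pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # 2. Allow the host on the subsystem with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Attach a controller from the host app; the DH-HMAC-CHAP exchange
  #    runs here with the chosen key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1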
00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.973 { 00:12:19.973 "cntlid": 123, 00:12:19.973 "qid": 0, 00:12:19.973 "state": "enabled", 00:12:19.973 "thread": "nvmf_tgt_poll_group_000", 00:12:19.973 "listen_address": { 00:12:19.973 "trtype": "TCP", 00:12:19.973 "adrfam": "IPv4", 00:12:19.973 "traddr": "10.0.0.2", 00:12:19.973 "trsvcid": "4420" 00:12:19.973 }, 00:12:19.973 "peer_address": { 00:12:19.973 "trtype": "TCP", 00:12:19.973 "adrfam": "IPv4", 00:12:19.973 "traddr": "10.0.0.1", 00:12:19.973 "trsvcid": "60082" 00:12:19.973 }, 00:12:19.973 "auth": { 00:12:19.973 "state": "completed", 00:12:19.973 "digest": "sha512", 00:12:19.973 "dhgroup": "ffdhe4096" 00:12:19.973 } 00:12:19.973 } 00:12:19.973 ]' 00:12:19.973 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.231 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.490 22:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.055 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.313 00:12:21.571 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.571 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.571 22:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.571 { 00:12:21.571 "cntlid": 125, 00:12:21.571 "qid": 0, 00:12:21.571 "state": "enabled", 00:12:21.571 "thread": "nvmf_tgt_poll_group_000", 00:12:21.571 "listen_address": { 00:12:21.571 "trtype": "TCP", 00:12:21.571 "adrfam": "IPv4", 00:12:21.571 "traddr": "10.0.0.2", 00:12:21.571 "trsvcid": "4420" 00:12:21.571 }, 00:12:21.571 "peer_address": { 00:12:21.571 "trtype": "TCP", 00:12:21.571 "adrfam": "IPv4", 00:12:21.571 "traddr": "10.0.0.1", 00:12:21.571 "trsvcid": "60114" 00:12:21.571 }, 00:12:21.571 
"auth": { 00:12:21.571 "state": "completed", 00:12:21.571 "digest": "sha512", 00:12:21.571 "dhgroup": "ffdhe4096" 00:12:21.571 } 00:12:21.571 } 00:12:21.571 ]' 00:12:21.571 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.829 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.086 22:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.653 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.910 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.167 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.167 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.423 { 00:12:23.423 "cntlid": 127, 00:12:23.423 "qid": 0, 00:12:23.423 "state": "enabled", 00:12:23.423 "thread": "nvmf_tgt_poll_group_000", 00:12:23.423 "listen_address": { 00:12:23.423 "trtype": "TCP", 00:12:23.423 "adrfam": "IPv4", 00:12:23.423 "traddr": "10.0.0.2", 00:12:23.423 "trsvcid": "4420" 00:12:23.423 }, 00:12:23.423 "peer_address": { 00:12:23.423 "trtype": "TCP", 00:12:23.423 "adrfam": "IPv4", 00:12:23.423 "traddr": "10.0.0.1", 00:12:23.423 "trsvcid": "60148" 00:12:23.423 }, 00:12:23.423 "auth": { 00:12:23.423 "state": "completed", 00:12:23.423 "digest": "sha512", 00:12:23.423 "dhgroup": "ffdhe4096" 00:12:23.423 } 00:12:23.423 } 00:12:23.423 ]' 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.423 22:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.680 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.242 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.498 22:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.755 00:12:24.755 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.755 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.755 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.013 { 00:12:25.013 "cntlid": 129, 00:12:25.013 "qid": 0, 00:12:25.013 "state": "enabled", 00:12:25.013 "thread": "nvmf_tgt_poll_group_000", 00:12:25.013 "listen_address": { 00:12:25.013 "trtype": "TCP", 00:12:25.013 "adrfam": "IPv4", 00:12:25.013 "traddr": "10.0.0.2", 00:12:25.013 "trsvcid": "4420" 00:12:25.013 }, 00:12:25.013 "peer_address": { 00:12:25.013 "trtype": "TCP", 00:12:25.013 "adrfam": "IPv4", 00:12:25.013 "traddr": "10.0.0.1", 00:12:25.013 "trsvcid": "60178" 00:12:25.013 }, 00:12:25.013 "auth": { 00:12:25.013 "state": "completed", 00:12:25.013 "digest": "sha512", 00:12:25.013 "dhgroup": "ffdhe6144" 00:12:25.013 } 00:12:25.013 } 00:12:25.013 ]' 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.013 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.270 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.270 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.270 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.270 22:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:25.835 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.835 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:25.835 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.835 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
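Each round then closes the same way: the host-side controller is detached, the same credentials are re-verified in-band with the kernel initiator via nvme-cli, and the host is removed from the subsystem before the next combination. A condensed sketch of that tail, reusing the shell variables from the sketch above; $key_secret and $ctrl_secret are placeholders standing in for the literal DHHC-1:...: strings printed in the log.

  # End of a round: detach, re-verify with nvme-cli, clean up (sketch)
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 37374fe9-a847-4b40-94af-b766955abedc \
      --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"

  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"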
00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.092 22:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.657 00:12:26.657 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.657 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.657 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.914 { 00:12:26.914 "cntlid": 131, 00:12:26.914 "qid": 0, 00:12:26.914 "state": "enabled", 00:12:26.914 "thread": "nvmf_tgt_poll_group_000", 00:12:26.914 "listen_address": { 00:12:26.914 "trtype": "TCP", 00:12:26.914 "adrfam": "IPv4", 00:12:26.914 "traddr": "10.0.0.2", 00:12:26.914 
"trsvcid": "4420" 00:12:26.914 }, 00:12:26.914 "peer_address": { 00:12:26.914 "trtype": "TCP", 00:12:26.914 "adrfam": "IPv4", 00:12:26.914 "traddr": "10.0.0.1", 00:12:26.914 "trsvcid": "60218" 00:12:26.914 }, 00:12:26.914 "auth": { 00:12:26.914 "state": "completed", 00:12:26.914 "digest": "sha512", 00:12:26.914 "dhgroup": "ffdhe6144" 00:12:26.914 } 00:12:26.914 } 00:12:26.914 ]' 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.914 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.172 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.172 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.172 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.172 22:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:27.737 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.008 22:23:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.008 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.599 00:12:28.599 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.599 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.599 22:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.599 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.599 { 00:12:28.599 "cntlid": 133, 00:12:28.599 "qid": 0, 00:12:28.599 "state": "enabled", 00:12:28.599 "thread": "nvmf_tgt_poll_group_000", 00:12:28.599 "listen_address": { 00:12:28.600 "trtype": "TCP", 00:12:28.600 "adrfam": "IPv4", 00:12:28.600 "traddr": "10.0.0.2", 00:12:28.600 "trsvcid": "4420" 00:12:28.600 }, 00:12:28.600 "peer_address": { 00:12:28.600 "trtype": "TCP", 00:12:28.600 "adrfam": "IPv4", 00:12:28.600 "traddr": "10.0.0.1", 00:12:28.600 "trsvcid": "34552" 00:12:28.600 }, 00:12:28.600 "auth": { 00:12:28.600 "state": "completed", 00:12:28.600 "digest": "sha512", 00:12:28.600 "dhgroup": "ffdhe6144" 00:12:28.600 } 00:12:28.600 } 00:12:28.600 ]' 00:12:28.600 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.600 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.600 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.857 22:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.789 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.047 
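The controller and qpair checks that follow each attach assert only the controller name plus three fields of the qpair's auth object. A minimal sketch of that verification step for this ffdhe6144 round, using the same RPC calls and jq filters that appear in the trace:

  # Confirm the attach succeeded and the qpair authenticated as expected
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]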
00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.303 { 00:12:30.303 "cntlid": 135, 00:12:30.303 "qid": 0, 00:12:30.303 "state": "enabled", 00:12:30.303 "thread": "nvmf_tgt_poll_group_000", 00:12:30.303 "listen_address": { 00:12:30.303 "trtype": "TCP", 00:12:30.303 "adrfam": "IPv4", 00:12:30.303 "traddr": "10.0.0.2", 00:12:30.303 "trsvcid": "4420" 00:12:30.303 }, 00:12:30.303 "peer_address": { 00:12:30.303 "trtype": "TCP", 00:12:30.303 "adrfam": "IPv4", 00:12:30.303 "traddr": "10.0.0.1", 00:12:30.303 "trsvcid": "34572" 00:12:30.303 }, 00:12:30.303 "auth": { 00:12:30.303 "state": "completed", 00:12:30.303 "digest": "sha512", 00:12:30.303 "dhgroup": "ffdhe6144" 00:12:30.303 } 00:12:30.303 } 00:12:30.303 ]' 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.303 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.561 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:30.561 22:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.561 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.561 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.561 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.818 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.441 22:23:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.441 22:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.441 22:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.441 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.441 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.006 00:12:32.006 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.006 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.006 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.264 { 00:12:32.264 "cntlid": 137, 00:12:32.264 "qid": 0, 00:12:32.264 "state": "enabled", 
00:12:32.264 "thread": "nvmf_tgt_poll_group_000", 00:12:32.264 "listen_address": { 00:12:32.264 "trtype": "TCP", 00:12:32.264 "adrfam": "IPv4", 00:12:32.264 "traddr": "10.0.0.2", 00:12:32.264 "trsvcid": "4420" 00:12:32.264 }, 00:12:32.264 "peer_address": { 00:12:32.264 "trtype": "TCP", 00:12:32.264 "adrfam": "IPv4", 00:12:32.264 "traddr": "10.0.0.1", 00:12:32.264 "trsvcid": "34586" 00:12:32.264 }, 00:12:32.264 "auth": { 00:12:32.264 "state": "completed", 00:12:32.264 "digest": "sha512", 00:12:32.264 "dhgroup": "ffdhe8192" 00:12:32.264 } 00:12:32.264 } 00:12:32.264 ]' 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.264 22:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.521 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:33.085 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.341 
22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.341 22:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.904 00:12:33.904 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.904 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.904 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.161 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.162 { 00:12:34.162 "cntlid": 139, 00:12:34.162 "qid": 0, 00:12:34.162 "state": "enabled", 00:12:34.162 "thread": "nvmf_tgt_poll_group_000", 00:12:34.162 "listen_address": { 00:12:34.162 "trtype": "TCP", 00:12:34.162 "adrfam": "IPv4", 00:12:34.162 "traddr": "10.0.0.2", 00:12:34.162 "trsvcid": "4420" 00:12:34.162 }, 00:12:34.162 "peer_address": { 00:12:34.162 "trtype": "TCP", 00:12:34.162 "adrfam": "IPv4", 00:12:34.162 "traddr": "10.0.0.1", 00:12:34.162 "trsvcid": "34604" 00:12:34.162 }, 00:12:34.162 "auth": { 00:12:34.162 "state": "completed", 00:12:34.162 "digest": "sha512", 00:12:34.162 "dhgroup": "ffdhe8192" 00:12:34.162 } 00:12:34.162 } 00:12:34.162 ]' 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.162 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.420 22:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:01:NTk2Y2FlMmI3ODg3MzAxYTU2NjJhODQyODBjMmNiYTj5Z2bJ: --dhchap-ctrl-secret DHHC-1:02:ZWUzMmJlMzgyMGVjYzE5NTliMGRkYjI4MTBiZjU1MjIyYzdhNDc4NDA3YzJlNDY0Dg5wyw==: 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.988 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.246 22:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.811 00:12:35.811 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.811 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.811 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.071 { 00:12:36.071 "cntlid": 141, 00:12:36.071 "qid": 0, 00:12:36.071 "state": "enabled", 00:12:36.071 "thread": "nvmf_tgt_poll_group_000", 00:12:36.071 "listen_address": { 00:12:36.071 "trtype": "TCP", 00:12:36.071 "adrfam": "IPv4", 00:12:36.071 "traddr": "10.0.0.2", 00:12:36.071 "trsvcid": "4420" 00:12:36.071 }, 00:12:36.071 "peer_address": { 00:12:36.071 "trtype": "TCP", 00:12:36.071 "adrfam": "IPv4", 00:12:36.071 "traddr": "10.0.0.1", 00:12:36.071 "trsvcid": "34636" 00:12:36.071 }, 00:12:36.071 "auth": { 00:12:36.071 "state": "completed", 00:12:36.071 "digest": "sha512", 00:12:36.071 "dhgroup": "ffdhe8192" 00:12:36.071 } 00:12:36.071 } 00:12:36.071 ]' 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.071 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.329 22:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:02:MTgxNTdjYmM1MTY5ZmVmNWQ1MTU1YjZhZGRhZmVmYzY2MWVhMDYxMjdjNjhkNTJip2Pi4g==: --dhchap-ctrl-secret DHHC-1:01:YjllZTEyOTNlMzY3NGM4NGZkNzBhYzQ1NTU4YmI3YTCoVxs0: 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:36.898 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:37.157 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.158 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.158 22:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.158 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.158 22:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.724 00:12:37.724 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.724 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.724 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
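For orientation between the traces, a minimal sketch of the connect_authenticate pass the block above just exercised (sha512 / ffdhe8192 / key3). Everything in it is restated from commands visible in the trace; the target-side RPCs are shown as comments because auth.sh issues them through its rpc_cmd wrapper rather than a fixed socket path, and the key names refer to the keys loaded earlier in the script.

    # Condensed recap of one connect_authenticate iteration as traced above.
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

    # Host side: restrict the initiator to this digest/dhgroup pair.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side (via rpc_cmd): allow the host with key3 only, since ckey3 is empty:
    #   nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN --dhchap-key key3

    # Host side: attach, verify, detach.
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    # Target side (via rpc_cmd): nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0,
    #   then jq '.[0].auth.digest / .dhgroup / .state' should report sha512 / ffdhe8192 / completed.
    $HOSTRPC bdev_nvme_detach_controller nvme0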
00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.006 { 00:12:38.006 "cntlid": 143, 00:12:38.006 "qid": 0, 00:12:38.006 "state": "enabled", 00:12:38.006 "thread": "nvmf_tgt_poll_group_000", 00:12:38.006 "listen_address": { 00:12:38.006 "trtype": "TCP", 00:12:38.006 "adrfam": "IPv4", 00:12:38.006 "traddr": "10.0.0.2", 00:12:38.006 "trsvcid": "4420" 00:12:38.006 }, 00:12:38.006 "peer_address": { 00:12:38.006 "trtype": "TCP", 00:12:38.006 "adrfam": "IPv4", 00:12:38.006 "traddr": "10.0.0.1", 00:12:38.006 "trsvcid": "34652" 00:12:38.006 }, 00:12:38.006 "auth": { 00:12:38.006 "state": "completed", 00:12:38.006 "digest": "sha512", 00:12:38.006 "dhgroup": "ffdhe8192" 00:12:38.006 } 00:12:38.006 } 00:12:38.006 ]' 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.006 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.278 22:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:38.840 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.097 22:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.661 00:12:39.661 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.661 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.661 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.918 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.918 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.918 22:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.918 22:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 22:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.919 { 00:12:39.919 "cntlid": 145, 00:12:39.919 "qid": 0, 00:12:39.919 "state": "enabled", 00:12:39.919 "thread": "nvmf_tgt_poll_group_000", 00:12:39.919 "listen_address": { 00:12:39.919 "trtype": "TCP", 00:12:39.919 "adrfam": "IPv4", 00:12:39.919 "traddr": "10.0.0.2", 00:12:39.919 "trsvcid": "4420" 00:12:39.919 }, 00:12:39.919 "peer_address": { 00:12:39.919 "trtype": "TCP", 00:12:39.919 "adrfam": "IPv4", 00:12:39.919 "traddr": "10.0.0.1", 00:12:39.919 "trsvcid": "53388" 00:12:39.919 }, 00:12:39.919 "auth": { 00:12:39.919 "state": "completed", 00:12:39.919 "digest": "sha512", 00:12:39.919 "dhgroup": "ffdhe8192" 00:12:39.919 } 00:12:39.919 } 
00:12:39.919 ]' 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.919 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.176 22:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:00:ZTA5ZDUzM2YyZGYzNjY1OTdhYzNiNGVhYTIwZjRjOTkwYTI5M2M1NDNmNjY5YmFkLnACuw==: --dhchap-ctrl-secret DHHC-1:03:MDNiOWQ2ZDFjZTBmY2IxNDc1ZjlmMWFlZDRlNDVkNGNkNmU4NzM0ZThlMGM0NjZiODQwYTNjNTA1MDQxMDY1Npgk4os=: 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.751 22:23:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:40.751 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:41.366 request: 00:12:41.366 { 00:12:41.366 "name": "nvme0", 00:12:41.366 "trtype": "tcp", 00:12:41.366 "traddr": "10.0.0.2", 00:12:41.366 "adrfam": "ipv4", 00:12:41.366 "trsvcid": "4420", 00:12:41.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:41.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:41.366 "prchk_reftag": false, 00:12:41.366 "prchk_guard": false, 00:12:41.366 "hdgst": false, 00:12:41.366 "ddgst": false, 00:12:41.366 "dhchap_key": "key2", 00:12:41.366 "method": "bdev_nvme_attach_controller", 00:12:41.366 "req_id": 1 00:12:41.366 } 00:12:41.366 Got JSON-RPC error response 00:12:41.366 response: 00:12:41.366 { 00:12:41.366 "code": -5, 00:12:41.366 "message": "Input/output error" 00:12:41.366 } 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.366 22:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.622 request: 00:12:41.622 { 00:12:41.622 "name": "nvme0", 00:12:41.622 "trtype": "tcp", 00:12:41.622 "traddr": "10.0.0.2", 00:12:41.622 "adrfam": "ipv4", 00:12:41.622 "trsvcid": "4420", 00:12:41.622 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:41.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:41.622 "prchk_reftag": false, 00:12:41.622 "prchk_guard": false, 00:12:41.622 "hdgst": false, 00:12:41.622 "ddgst": false, 00:12:41.622 "dhchap_key": "key1", 00:12:41.622 "dhchap_ctrlr_key": "ckey2", 00:12:41.622 "method": "bdev_nvme_attach_controller", 00:12:41.622 "req_id": 1 00:12:41.622 } 00:12:41.622 Got JSON-RPC error response 00:12:41.622 response: 00:12:41.622 { 00:12:41.622 "code": -5, 00:12:41.622 "message": "Input/output error" 00:12:41.622 } 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key1 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.880 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.138 request: 00:12:42.138 { 00:12:42.138 "name": "nvme0", 00:12:42.138 "trtype": "tcp", 00:12:42.138 "traddr": "10.0.0.2", 00:12:42.138 "adrfam": "ipv4", 00:12:42.138 "trsvcid": "4420", 00:12:42.138 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:42.138 "prchk_reftag": false, 00:12:42.138 "prchk_guard": false, 00:12:42.138 "hdgst": false, 00:12:42.138 "ddgst": false, 00:12:42.138 "dhchap_key": "key1", 00:12:42.138 "dhchap_ctrlr_key": "ckey1", 00:12:42.138 "method": "bdev_nvme_attach_controller", 00:12:42.138 "req_id": 1 00:12:42.138 } 00:12:42.138 Got JSON-RPC error response 00:12:42.138 response: 00:12:42.138 { 00:12:42.138 "code": -5, 00:12:42.138 "message": "Input/output error" 00:12:42.138 } 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69492 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69492 ']' 00:12:42.138 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69492 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69492 00:12:42.396 killing process with pid 69492 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69492' 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69492 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69492 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.396 22:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72271 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72271 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72271 ']' 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.396 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.331 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.331 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:43.331 22:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.331 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.331 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
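The restart recorded above amounts to the following sketch, assuming the network namespace, binary path, flags and RPC socket shown in the trace; waitforlisten is the autotest helper that polls the socket until the new pid answers.

    # Relaunch the target with RPC processing deferred (--wait-for-rpc) and nvmf_auth
    # debug logging enabled, inside the nvmf_tgt_ns_spdk namespace used by this run.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Issue no further RPCs until /var/tmp/spdk.sock is up for the new process.
    waitforlisten "$nvmfpid"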
00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72271 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72271 ']' 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.332 22:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.589 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.589 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:43.589 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:43.589 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.589 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.847 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.848 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.413 00:12:44.413 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.413 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.413 22:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.672 { 00:12:44.672 "cntlid": 1, 00:12:44.672 "qid": 0, 00:12:44.672 "state": "enabled", 00:12:44.672 "thread": "nvmf_tgt_poll_group_000", 00:12:44.672 "listen_address": { 00:12:44.672 "trtype": "TCP", 00:12:44.672 "adrfam": "IPv4", 00:12:44.672 "traddr": "10.0.0.2", 00:12:44.672 "trsvcid": "4420" 00:12:44.672 }, 00:12:44.672 "peer_address": { 00:12:44.672 "trtype": "TCP", 00:12:44.672 "adrfam": "IPv4", 00:12:44.672 "traddr": "10.0.0.1", 00:12:44.672 "trsvcid": "53438" 00:12:44.672 }, 00:12:44.672 "auth": { 00:12:44.672 "state": "completed", 00:12:44.672 "digest": "sha512", 00:12:44.672 "dhgroup": "ffdhe8192" 00:12:44.672 } 00:12:44.672 } 00:12:44.672 ]' 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.672 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.931 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid 37374fe9-a847-4b40-94af-b766955abedc --dhchap-secret DHHC-1:03:ZjljNjViNmY5OGY2MzU1MzQ4YzZhMWRlMTY2NzhmNmY0MjkyYjgyMzMyYTY3MTQ1MDY1MjAyMTBlYzMzYTQxZaD2MKQ=: 00:12:45.497 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.497 22:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:45.497 22:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.497 22:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --dhchap-key key3 00:12:45.497 22:23:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:45.497 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.756 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.014 request: 00:12:46.014 { 00:12:46.014 "name": "nvme0", 00:12:46.014 "trtype": "tcp", 00:12:46.014 "traddr": "10.0.0.2", 00:12:46.014 "adrfam": "ipv4", 00:12:46.014 "trsvcid": "4420", 00:12:46.014 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:46.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:46.014 "prchk_reftag": false, 00:12:46.014 "prchk_guard": false, 00:12:46.014 "hdgst": false, 00:12:46.014 "ddgst": false, 00:12:46.014 "dhchap_key": "key3", 00:12:46.014 "method": "bdev_nvme_attach_controller", 00:12:46.014 "req_id": 1 00:12:46.014 } 00:12:46.014 Got JSON-RPC error response 00:12:46.014 response: 00:12:46.014 { 00:12:46.014 "code": -5, 00:12:46.014 "message": "Input/output error" 00:12:46.014 } 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
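The failed attach above reduces to the following sketch; it only restates the flags from the trace. The initiator is limited to sha256 digests, and auth.sh's NOT wrapper asserts that the subsequent attach with key3 fails with the -5 / Input/output error shown in the response.

    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc

    # Offer only sha256 for DH-HMAC-CHAP on the host side.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256

    # This attach is expected to be rejected; the trace asserts the failure via NOT.
    if $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected: attach succeeded"
    else
        echo "attach rejected as expected (Input/output error)"
    fi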
00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:46.014 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.274 request: 00:12:46.274 { 00:12:46.274 "name": "nvme0", 00:12:46.274 "trtype": "tcp", 00:12:46.274 "traddr": "10.0.0.2", 00:12:46.274 "adrfam": "ipv4", 00:12:46.274 "trsvcid": "4420", 00:12:46.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:46.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:46.274 "prchk_reftag": false, 00:12:46.274 "prchk_guard": false, 00:12:46.274 "hdgst": false, 00:12:46.274 "ddgst": false, 00:12:46.274 "dhchap_key": "key3", 00:12:46.274 "method": "bdev_nvme_attach_controller", 00:12:46.274 "req_id": 1 00:12:46.274 } 00:12:46.274 Got JSON-RPC error response 00:12:46.274 response: 00:12:46.274 { 00:12:46.274 "code": -5, 00:12:46.274 "message": "Input/output error" 00:12:46.274 } 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:46.274 22:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:46.533 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:46.533 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.533 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:46.534 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:12:46.793 request: 00:12:46.793 { 00:12:46.793 "name": "nvme0", 00:12:46.793 "trtype": "tcp", 00:12:46.793 "traddr": "10.0.0.2", 00:12:46.793 "adrfam": "ipv4", 00:12:46.793 "trsvcid": "4420", 00:12:46.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:46.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc", 00:12:46.793 "prchk_reftag": false, 00:12:46.793 "prchk_guard": false, 00:12:46.793 "hdgst": false, 00:12:46.793 "ddgst": false, 00:12:46.793 "dhchap_key": "key0", 00:12:46.793 "dhchap_ctrlr_key": "key1", 00:12:46.793 "method": "bdev_nvme_attach_controller", 00:12:46.793 "req_id": 1 00:12:46.793 } 00:12:46.793 Got JSON-RPC error response 00:12:46.793 response: 00:12:46.793 { 00:12:46.793 "code": -5, 00:12:46.793 "message": "Input/output error" 00:12:46.793 } 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:46.793 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:47.051 00:12:47.051 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:47.051 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.051 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:47.308 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.308 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.308 22:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.565 22:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:47.565 22:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:47.565 22:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69530 00:12:47.565 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69530 ']' 00:12:47.565 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69530 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69530 00:12:47.566 killing process with pid 69530 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:47.566 22:24:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69530' 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69530 00:12:47.566 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69530 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.851 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.851 rmmod nvme_tcp 00:12:47.851 rmmod nvme_fabrics 00:12:47.851 rmmod nvme_keyring 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72271 ']' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72271 ']' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72271' 00:12:48.120 killing process with pid 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72271 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.120 22:24:01 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:48.377 22:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pgd /tmp/spdk.key-sha256.STR /tmp/spdk.key-sha384.32e /tmp/spdk.key-sha512.zjO /tmp/spdk.key-sha512.1ty /tmp/spdk.key-sha384.FUf /tmp/spdk.key-sha256.UgK '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:48.377 00:12:48.377 real 2m22.669s 00:12:48.377 user 5m29.703s 00:12:48.377 sys 0m30.112s 00:12:48.377 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.378 22:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.378 ************************************ 00:12:48.378 END TEST nvmf_auth_target 00:12:48.378 ************************************ 00:12:48.378 22:24:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:48.378 22:24:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:48.378 22:24:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:48.378 22:24:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:48.378 22:24:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.378 22:24:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.378 ************************************ 00:12:48.378 START TEST nvmf_bdevio_no_huge 00:12:48.378 ************************************ 00:12:48.378 22:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:48.378 * Looking for test storage... 00:12:48.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.378 22:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.378 22:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.378 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.636 
22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.636 22:24:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:48.636 Cannot find device "nvmf_tgt_br" 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.636 Cannot find device "nvmf_tgt_br2" 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:48.636 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:48.637 Cannot find device "nvmf_tgt_br" 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:48.637 Cannot find device "nvmf_tgt_br2" 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.637 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:48.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:48.896 00:12:48.896 --- 10.0.0.2 ping statistics --- 00:12:48.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.896 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:48.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:48.896 00:12:48.896 --- 10.0.0.3 ping statistics --- 00:12:48.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.896 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:48.896 00:12:48.896 --- 10.0.0.1 ping statistics --- 00:12:48.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.896 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72574 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72574 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72574 ']' 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.896 22:24:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.896 [2024-07-15 22:24:02.490312] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:48.896 [2024-07-15 22:24:02.490383] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:49.154 [2024-07-15 22:24:02.625667] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.154 [2024-07-15 22:24:02.738374] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:49.154 [2024-07-15 22:24:02.738428] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.155 [2024-07-15 22:24:02.738438] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.155 [2024-07-15 22:24:02.738446] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.155 [2024-07-15 22:24:02.738453] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.155 [2024-07-15 22:24:02.738680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:49.155 [2024-07-15 22:24:02.738864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:49.155 [2024-07-15 22:24:02.738907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:49.155 [2024-07-15 22:24:02.738989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.155 [2024-07-15 22:24:02.760875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:49.720 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.720 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:49.720 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.720 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.720 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 [2024-07-15 22:24:03.372927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 Malloc0 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 [2024-07-15 22:24:03.424973] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:49.979 { 00:12:49.979 "params": { 00:12:49.979 "name": "Nvme$subsystem", 00:12:49.979 "trtype": "$TEST_TRANSPORT", 00:12:49.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.979 "adrfam": "ipv4", 00:12:49.979 "trsvcid": "$NVMF_PORT", 00:12:49.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.979 "hdgst": ${hdgst:-false}, 00:12:49.979 "ddgst": ${ddgst:-false} 00:12:49.979 }, 00:12:49.979 "method": "bdev_nvme_attach_controller" 00:12:49.979 } 00:12:49.979 EOF 00:12:49.979 )") 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:49.979 22:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:49.979 "params": { 00:12:49.979 "name": "Nvme1", 00:12:49.979 "trtype": "tcp", 00:12:49.979 "traddr": "10.0.0.2", 00:12:49.979 "adrfam": "ipv4", 00:12:49.979 "trsvcid": "4420", 00:12:49.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.979 "hdgst": false, 00:12:49.979 "ddgst": false 00:12:49.979 }, 00:12:49.979 "method": "bdev_nvme_attach_controller" 00:12:49.979 }' 00:12:49.979 [2024-07-15 22:24:03.474835] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:12:49.979 [2024-07-15 22:24:03.474933] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72610 ] 00:12:50.237 [2024-07-15 22:24:03.621905] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.237 [2024-07-15 22:24:03.747515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.237 [2024-07-15 22:24:03.747690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.237 [2024-07-15 22:24:03.747694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.237 [2024-07-15 22:24:03.760180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:50.494 I/O targets: 00:12:50.495 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:50.495 00:12:50.495 00:12:50.495 CUnit - A unit testing framework for C - Version 2.1-3 00:12:50.495 http://cunit.sourceforge.net/ 00:12:50.495 00:12:50.495 00:12:50.495 Suite: bdevio tests on: Nvme1n1 00:12:50.495 Test: blockdev write read block ...passed 00:12:50.495 Test: blockdev write zeroes read block ...passed 00:12:50.495 Test: blockdev write zeroes read no split ...passed 00:12:50.495 Test: blockdev write zeroes read split ...passed 00:12:50.495 Test: blockdev write zeroes read split partial ...passed 00:12:50.495 Test: blockdev reset ...[2024-07-15 22:24:03.936004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:50.495 [2024-07-15 22:24:03.936086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2da10 (9): Bad file descriptor 00:12:50.495 [2024-07-15 22:24:03.955774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
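The --json /dev/fd/62 that bdevio was started with a few lines up is the config produced by gen_nvmf_target_json arriving through a process substitution; the printf above shows the bdev_nvme_attach_controller parameters it carries. A rough, untested standalone equivalent, assuming this run's tree layout and the same environment (TEST_TRANSPORT=tcp, target reachable at 10.0.0.2:4420), would be:

  # gen_nvmf_target_json (defined in test/nvmf/common.sh) emits the bdev config shown above
  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024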
00:12:50.495 passed 00:12:50.495 Test: blockdev write read 8 blocks ...passed 00:12:50.495 Test: blockdev write read size > 128k ...passed 00:12:50.495 Test: blockdev write read invalid size ...passed 00:12:50.495 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.495 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.495 Test: blockdev write read max offset ...passed 00:12:50.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.495 Test: blockdev writev readv 8 blocks ...passed 00:12:50.495 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.495 Test: blockdev writev readv block ...passed 00:12:50.495 Test: blockdev writev readv size > 128k ...passed 00:12:50.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.495 Test: blockdev comparev and writev ...[2024-07-15 22:24:03.962057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.962115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.962410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.962448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.962818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.962849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.962858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.963083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.963094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.963107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:50.495 [2024-07-15 22:24:03.963115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:50.495 passed 00:12:50.495 Test: blockdev nvme passthru rw ...passed 00:12:50.495 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:24:03.964266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:50.495 [2024-07-15 22:24:03.964287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.964366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:50.495 [2024-07-15 22:24:03.964376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.964457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:50.495 [2024-07-15 22:24:03.964468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:50.495 [2024-07-15 22:24:03.964539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:50.495 [2024-07-15 22:24:03.964550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:50.495 passed 00:12:50.495 Test: blockdev nvme admin passthru ...passed 00:12:50.495 Test: blockdev copy ...passed 00:12:50.495 00:12:50.495 Run Summary: Type Total Ran Passed Failed Inactive 00:12:50.495 suites 1 1 n/a 0 0 00:12:50.495 tests 23 23 23 0 0 00:12:50.495 asserts 152 152 152 0 n/a 00:12:50.495 00:12:50.495 Elapsed time = 0.146 seconds 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.753 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.753 rmmod nvme_tcp 00:12:51.011 rmmod nvme_fabrics 00:12:51.011 rmmod nvme_keyring 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72574 ']' 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72574 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72574 ']' 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72574 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72574 00:12:51.011 killing process with pid 72574 00:12:51.011 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:51.012 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:51.012 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72574' 00:12:51.012 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72574 00:12:51.012 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72574 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.270 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.528 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:51.528 00:12:51.528 real 0m3.054s 00:12:51.528 user 0m9.430s 00:12:51.528 sys 0m1.357s 00:12:51.528 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.528 22:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.528 ************************************ 00:12:51.528 END TEST nvmf_bdevio_no_huge 00:12:51.528 ************************************ 00:12:51.528 22:24:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:51.528 22:24:04 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:51.528 22:24:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.528 22:24:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.528 22:24:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:51.528 ************************************ 00:12:51.528 START TEST nvmf_tls 00:12:51.528 ************************************ 00:12:51.528 22:24:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:51.528 * Looking for test storage... 
00:12:51.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:12:51.528 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.529 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:51.788 Cannot find device "nvmf_tgt_br" 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.788 Cannot find device "nvmf_tgt_br2" 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:51.788 Cannot find device "nvmf_tgt_br" 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:51.788 Cannot find device "nvmf_tgt_br2" 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:51.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:51.788 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:52.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:12:52.047 00:12:52.047 --- 10.0.0.2 ping statistics --- 00:12:52.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.047 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:52.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:12:52.047 00:12:52.047 --- 10.0.0.3 ping statistics --- 00:12:52.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.047 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:52.047 00:12:52.047 --- 10.0.0.1 ping statistics --- 00:12:52.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.047 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72797 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72797 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72797 ']' 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.047 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.048 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.048 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.048 22:24:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:52.048 [2024-07-15 22:24:05.648819] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:52.048 [2024-07-15 22:24:05.648889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.309 [2024-07-15 22:24:05.791622] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.309 [2024-07-15 22:24:05.880405] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.309 [2024-07-15 22:24:05.880447] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:52.309 [2024-07-15 22:24:05.880456] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.309 [2024-07-15 22:24:05.880465] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.309 [2024-07-15 22:24:05.880471] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.309 [2024-07-15 22:24:05.880500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.875 22:24:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.875 22:24:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:52.875 22:24:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.875 22:24:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.875 22:24:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.133 22:24:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.133 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:53.133 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:53.133 true 00:12:53.133 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.133 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:53.390 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:53.390 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:53.390 22:24:06 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:53.649 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:53.649 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.649 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:53.649 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:53.649 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:53.931 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.931 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:54.206 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:54.207 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:54.207 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:54.207 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.464 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:54.464 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:54.464 22:24:07 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:54.464 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.464 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
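The option checks running here, and the kTLS readback that continues just below, all follow the same set-then-read-back pattern against the ssl socket implementation, issued over the target's RPC socket while it is still parked in --wait-for-rpc. A condensed sketch of that pattern, assuming the same rpc.py path used throughout this trace and jq on PATH:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # make ssl the default implementation for new sockets, then set and verify TLS 1.3
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" == 13 ]]
  # toggle kTLS both ways and confirm the reported state each time
  $rpc sock_impl_set_options -i ssl --enable-ktls
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" == true ]]
  $rpc sock_impl_set_options -i ssl --disable-ktls
  [[ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" == false ]]

These calls land before framework_start_init, which is why the target was started with --wait-for-rpc and why the init call only shows up a little further down in the trace.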
00:12:54.723 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:54.723 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:54.723 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:54.981 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.981 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:54.981 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:54.981 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:54.981 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:54.982 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.StnTv3Po7b 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Z0drO0PlUc 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.StnTv3Po7b 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Z0drO0PlUc 00:12:55.240 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:55.499 22:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:55.757 [2024-07-15 22:24:09.136140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:55.757 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.StnTv3Po7b 00:12:55.757 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.StnTv3Po7b 00:12:55.757 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.757 [2024-07-15 22:24:09.362798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.757 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:56.015 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:56.273 [2024-07-15 22:24:09.698333] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:56.273 [2024-07-15 22:24:09.698509] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.273 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:56.273 malloc0 00:12:56.531 22:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.531 22:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.StnTv3Po7b 00:12:56.790 [2024-07-15 22:24:10.278087] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:56.790 22:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.StnTv3Po7b 00:13:08.993 Initializing NVMe Controllers 00:13:08.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:08.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:08.993 Initialization complete. Launching workers. 
00:13:08.993 ======================================================== 00:13:08.993 Latency(us) 00:13:08.993 Device Information : IOPS MiB/s Average min max 00:13:08.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14984.89 58.53 4271.48 852.38 5086.88 00:13:08.993 ======================================================== 00:13:08.993 Total : 14984.89 58.53 4271.48 852.38 5086.88 00:13:08.993 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.StnTv3Po7b 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.StnTv3Po7b' 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73017 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73017 /var/tmp/bdevperf.sock 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73017 ']' 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.993 22:24:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.993 [2024-07-15 22:24:20.525560] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:08.993 [2024-07-15 22:24:20.525640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73017 ] 00:13:08.993 [2024-07-15 22:24:20.667517] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.993 [2024-07-15 22:24:20.761818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.993 [2024-07-15 22:24:20.804058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:08.993 22:24:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.993 22:24:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:08.993 22:24:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.StnTv3Po7b 00:13:08.993 [2024-07-15 22:24:21.593713] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.993 [2024-07-15 22:24:21.593825] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:08.993 TLSTESTn1 00:13:08.993 22:24:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:08.993 Running I/O for 10 seconds... 00:13:18.996 00:13:18.996 Latency(us) 00:13:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:18.996 Verification LBA range: start 0x0 length 0x2000 00:13:18.996 TLSTESTn1 : 10.01 5529.52 21.60 0.00 0.00 23114.74 3579.48 18213.22 00:13:18.996 =================================================================================================================== 00:13:18.996 Total : 5529.52 21.60 0.00 0.00 23114.74 3579.48 18213.22 00:13:18.996 0 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73017 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73017 ']' 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73017 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73017 00:13:18.996 killing process with pid 73017 00:13:18.996 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.996 00:13:18.996 Latency(us) 00:13:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.996 =================================================================================================================== 00:13:18.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73017' 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73017 00:13:18.996 [2024-07-15 22:24:31.845272] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:18.996 22:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73017 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0drO0PlUc 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0drO0PlUc 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0drO0PlUc 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z0drO0PlUc' 00:13:18.996 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73145 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73145 /var/tmp/bdevperf.sock 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73145 ']' 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.997 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.997 [2024-07-15 22:24:32.082966] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:18.997 [2024-07-15 22:24:32.083035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73145 ] 00:13:18.997 [2024-07-15 22:24:32.213782] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.997 [2024-07-15 22:24:32.308929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.997 [2024-07-15 22:24:32.351222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.562 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.562 22:24:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:19.562 22:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z0drO0PlUc 00:13:19.562 [2024-07-15 22:24:33.098333] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.562 [2024-07-15 22:24:33.098448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:19.562 [2024-07-15 22:24:33.108722] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:19.562 [2024-07-15 22:24:33.108763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c83d0 (107): Transport endpoint is not connected 00:13:19.562 [2024-07-15 22:24:33.109750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c83d0 (9): Bad file descriptor 00:13:19.562 [2024-07-15 22:24:33.110745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:19.562 [2024-07-15 22:24:33.110771] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:19.562 [2024-07-15 22:24:33.110785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
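This first negative case attaches with /tmp/tmp.Z0drO0PlUc, the second interchange PSK generated earlier, while the target has /tmp/tmp.StnTv3Po7b registered for host1 against cnode1; the identity matches but the key material does not, so the TLS handshake fails and the controller ends up in the failed state above (the JSON-RPC request and error dump follow). Both keys were produced by format_interchange_psk, whose python step is not echoed in the xtrace. A rough sketch of the key's shape only: base64 of the configured key bytes with a CRC-32 appended; the little-endian byte order here is an assumption, not something this log confirms:

  # Sketch only, not the nvmf/common.sh helper itself; CRC byte order is assumed.
  key=00112233445566778899aabbccddeeff
  digest=1
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"

In the trace the real helper turned this 32-character key and its ffeedd... counterpart (both digest 1) into the two NVMeTLSkey-1:01:...: strings above, and later a 48-character key with digest 2 into the NVMeTLSkey-1:02:...: string used near the end of the run.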
00:13:19.562 request: 00:13:19.562 { 00:13:19.562 "name": "TLSTEST", 00:13:19.562 "trtype": "tcp", 00:13:19.562 "traddr": "10.0.0.2", 00:13:19.562 "adrfam": "ipv4", 00:13:19.562 "trsvcid": "4420", 00:13:19.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.562 "prchk_reftag": false, 00:13:19.562 "prchk_guard": false, 00:13:19.562 "hdgst": false, 00:13:19.562 "ddgst": false, 00:13:19.562 "psk": "/tmp/tmp.Z0drO0PlUc", 00:13:19.562 "method": "bdev_nvme_attach_controller", 00:13:19.562 "req_id": 1 00:13:19.562 } 00:13:19.562 Got JSON-RPC error response 00:13:19.562 response: 00:13:19.562 { 00:13:19.562 "code": -5, 00:13:19.562 "message": "Input/output error" 00:13:19.562 } 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73145 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73145 ']' 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73145 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73145 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:19.562 killing process with pid 73145 00:13:19.562 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.562 00:13:19.562 Latency(us) 00:13:19.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.562 =================================================================================================================== 00:13:19.562 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73145' 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73145 00:13:19.562 [2024-07-15 22:24:33.179038] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:19.562 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73145 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.StnTv3Po7b 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.StnTv3Po7b 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.StnTv3Po7b 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.StnTv3Po7b' 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73171 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73171 /var/tmp/bdevperf.sock 00:13:19.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73171 ']' 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.819 22:24:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 [2024-07-15 22:24:33.411296] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:19.819 [2024-07-15 22:24:33.411481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73171 ] 00:13:20.076 [2024-07-15 22:24:33.553829] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.076 [2024-07-15 22:24:33.647854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.076 [2024-07-15 22:24:33.690199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.StnTv3Po7b 00:13:21.008 [2024-07-15 22:24:34.471747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.008 [2024-07-15 22:24:34.471856] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:21.008 [2024-07-15 22:24:34.476532] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:21.008 [2024-07-15 22:24:34.476569] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:21.008 [2024-07-15 22:24:34.476627] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:21.008 [2024-07-15 22:24:34.477170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207a3d0 (107): Transport endpoint is not connected 00:13:21.008 [2024-07-15 22:24:34.478155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207a3d0 (9): Bad file descriptor 00:13:21.008 [2024-07-15 22:24:34.479151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:21.008 [2024-07-15 22:24:34.479173] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:21.008 [2024-07-15 22:24:34.479187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
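The "Could not find PSK for identity" errors above spell out how the target resolves keys during the handshake: by the host NQN plus subsystem NQN pair. Setup only ever ran nvmf_subsystem_add_host for host1 against cnode1, so an attach presenting host2 has nothing to match and the request dumped next is expected to fail. A sketch of the registrations involved (the host2 key file name is hypothetical, not taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # the only registration made during setup; it backs the identity
  # "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.StnTv3Po7b
  # a second initiator would need its own entry, e.g. (hypothetical key file):
  # $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
  #     nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.host2key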
00:13:21.008 request: 00:13:21.008 { 00:13:21.008 "name": "TLSTEST", 00:13:21.008 "trtype": "tcp", 00:13:21.008 "traddr": "10.0.0.2", 00:13:21.008 "adrfam": "ipv4", 00:13:21.008 "trsvcid": "4420", 00:13:21.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:21.008 "prchk_reftag": false, 00:13:21.008 "prchk_guard": false, 00:13:21.008 "hdgst": false, 00:13:21.008 "ddgst": false, 00:13:21.008 "psk": "/tmp/tmp.StnTv3Po7b", 00:13:21.008 "method": "bdev_nvme_attach_controller", 00:13:21.008 "req_id": 1 00:13:21.008 } 00:13:21.008 Got JSON-RPC error response 00:13:21.008 response: 00:13:21.008 { 00:13:21.008 "code": -5, 00:13:21.008 "message": "Input/output error" 00:13:21.008 } 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73171 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73171 ']' 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73171 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73171 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:21.008 killing process with pid 73171 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73171' 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73171 00:13:21.008 Received shutdown signal, test time was about 10.000000 seconds 00:13:21.008 00:13:21.008 Latency(us) 00:13:21.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.008 =================================================================================================================== 00:13:21.008 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.008 [2024-07-15 22:24:34.530701] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:21.008 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73171 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.StnTv3Po7b 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.StnTv3Po7b 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.StnTv3Po7b 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.StnTv3Po7b' 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73200 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73200 /var/tmp/bdevperf.sock 00:13:21.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73200 ']' 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.266 22:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.266 [2024-07-15 22:24:34.768873] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:21.266 [2024-07-15 22:24:34.768934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73200 ] 00:13:21.266 [2024-07-15 22:24:34.898278] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.524 [2024-07-15 22:24:34.985653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.524 [2024-07-15 22:24:35.027958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.090 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.090 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:22.090 22:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.StnTv3Po7b 00:13:22.350 [2024-07-15 22:24:35.889868] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:22.350 [2024-07-15 22:24:35.890216] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:22.350 [2024-07-15 22:24:35.894829] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:22.350 [2024-07-15 22:24:35.895008] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:22.350 [2024-07-15 22:24:35.895137] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:22.350 [2024-07-15 22:24:35.895587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac33d0 (107): Transport endpoint is not connected 00:13:22.350 [2024-07-15 22:24:35.896576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac33d0 (9): Bad file descriptor 00:13:22.350 [2024-07-15 22:24:35.897571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:22.350 [2024-07-15 22:24:35.897707] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:22.350 [2024-07-15 22:24:35.897788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
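This third case asks for nqn.2016-06.io.spdk:cnode2, which was never created on this target, so no PSK exists for the host1/cnode2 identity and the attach fails as dumped next. For contrast, the sequence the successful TLSTESTn1 runs in this trace rely on, condensed from the setup_nvmf_tgt and run_bdevperf steps above with the xtrace noise stripped (same RPCs and flags):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.StnTv3Po7b
  # target side: TCP transport, subsystem, TLS listener (-k), namespace, allowed host + PSK
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
  # initiator side, against bdevperf's RPC socket: attach over TLS with the matching key
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key"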
00:13:22.350 request: 00:13:22.350 { 00:13:22.350 "name": "TLSTEST", 00:13:22.350 "trtype": "tcp", 00:13:22.350 "traddr": "10.0.0.2", 00:13:22.350 "adrfam": "ipv4", 00:13:22.350 "trsvcid": "4420", 00:13:22.350 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:22.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.350 "prchk_reftag": false, 00:13:22.350 "prchk_guard": false, 00:13:22.350 "hdgst": false, 00:13:22.350 "ddgst": false, 00:13:22.350 "psk": "/tmp/tmp.StnTv3Po7b", 00:13:22.350 "method": "bdev_nvme_attach_controller", 00:13:22.350 "req_id": 1 00:13:22.350 } 00:13:22.350 Got JSON-RPC error response 00:13:22.350 response: 00:13:22.350 { 00:13:22.350 "code": -5, 00:13:22.350 "message": "Input/output error" 00:13:22.350 } 00:13:22.350 22:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73200 00:13:22.350 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73200 ']' 00:13:22.350 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73200 00:13:22.350 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73200 00:13:22.351 killing process with pid 73200 00:13:22.351 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.351 00:13:22.351 Latency(us) 00:13:22.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.351 =================================================================================================================== 00:13:22.351 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73200' 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73200 00:13:22.351 [2024-07-15 22:24:35.951347] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:22.351 22:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73200 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73224 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73224 /var/tmp/bdevperf.sock 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73224 ']' 00:13:22.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.609 22:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.609 [2024-07-15 22:24:36.189421] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:22.609 [2024-07-15 22:24:36.189630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73224 ] 00:13:22.867 [2024-07-15 22:24:36.329792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.867 [2024-07-15 22:24:36.421708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.867 [2024-07-15 22:24:36.464028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.434 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.434 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:23.434 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:23.692 [2024-07-15 22:24:37.222371] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:23.692 [2024-07-15 22:24:37.223913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163cda0 (9): Bad file descriptor 00:13:23.692 [2024-07-15 22:24:37.224907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:23.692 [2024-07-15 22:24:37.224925] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:23.692 [2024-07-15 22:24:37.224938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
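Each of the four failing attach attempts in this stretch is wrapped in the NOT helper (the valid_exec_arg / es=1 / return 1 bookkeeping visible in the xtrace), so a non-zero exit from run_bdevperf is the expected outcome and the error dump that follows is not a test failure. A rough stand-in for that pattern, not SPDK's actual autotest_common.sh code:

  # Illustrative only: succeed exactly when the wrapped command fails.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  # e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''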
00:13:23.692 request: 00:13:23.692 { 00:13:23.692 "name": "TLSTEST", 00:13:23.692 "trtype": "tcp", 00:13:23.692 "traddr": "10.0.0.2", 00:13:23.692 "adrfam": "ipv4", 00:13:23.692 "trsvcid": "4420", 00:13:23.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.692 "prchk_reftag": false, 00:13:23.692 "prchk_guard": false, 00:13:23.692 "hdgst": false, 00:13:23.692 "ddgst": false, 00:13:23.692 "method": "bdev_nvme_attach_controller", 00:13:23.692 "req_id": 1 00:13:23.692 } 00:13:23.692 Got JSON-RPC error response 00:13:23.692 response: 00:13:23.692 { 00:13:23.692 "code": -5, 00:13:23.692 "message": "Input/output error" 00:13:23.692 } 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73224 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73224 ']' 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73224 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73224 00:13:23.692 killing process with pid 73224 00:13:23.692 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.692 00:13:23.692 Latency(us) 00:13:23.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.692 =================================================================================================================== 00:13:23.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73224' 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73224 00:13:23.692 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73224 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72797 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72797 ']' 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72797 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72797 00:13:23.950 killing process with pid 72797 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72797' 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72797 00:13:23.950 [2024-07-15 22:24:37.500560] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:23.950 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72797 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.eXtlaEkaqG 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.eXtlaEkaqG 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73262 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73262 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73262 ']' 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.246 22:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.246 [2024-07-15 22:24:37.815339] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:24.247 [2024-07-15 22:24:37.815403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.505 [2024-07-15 22:24:37.959225] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.505 [2024-07-15 22:24:38.043457] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.505 [2024-07-15 22:24:38.043502] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.505 [2024-07-15 22:24:38.043511] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.505 [2024-07-15 22:24:38.043519] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.505 [2024-07-15 22:24:38.043526] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.505 [2024-07-15 22:24:38.043550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.505 [2024-07-15 22:24:38.084374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.072 22:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.072 22:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:25.072 22:24:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.072 22:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.072 22:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.330 22:24:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.330 22:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:25.330 22:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eXtlaEkaqG 00:13:25.330 22:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:25.330 [2024-07-15 22:24:38.877271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.330 22:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:25.589 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:25.849 [2024-07-15 22:24:39.244706] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.849 [2024-07-15 22:24:39.244890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.849 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:25.849 malloc0 00:13:25.849 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:26.108 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:26.365 
[2024-07-15 22:24:39.808711] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXtlaEkaqG 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eXtlaEkaqG' 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73311 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73311 /var/tmp/bdevperf.sock 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73311 ']' 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.365 22:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.365 [2024-07-15 22:24:39.880983] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
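Editor's note: the setup_nvmf_tgt steps traced above reduce to the following rpc.py sequence. This is a condensed restatement of the exact commands in the trace (same NQNs, addresses, and key path); only the xtrace decoration is stripped.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.eXtlaEkaqG    # PSK interchange file, must stay mode 0600

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (logged above as experimental)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # ties the host NQN to the PSK file; this is what triggers the deprecated "PSK path" warning
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"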
00:13:26.365 [2024-07-15 22:24:39.881208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:13:26.623 [2024-07-15 22:24:40.023322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.623 [2024-07-15 22:24:40.116826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.623 [2024-07-15 22:24:40.160051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:27.241 22:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.241 22:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:27.241 22:24:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:27.500 [2024-07-15 22:24:40.878422] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.500 [2024-07-15 22:24:40.878535] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:27.500 TLSTESTn1 00:13:27.500 22:24:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:27.500 Running I/O for 10 seconds... 00:13:37.522 00:13:37.522 Latency(us) 00:13:37.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:37.523 Verification LBA range: start 0x0 length 0x2000 00:13:37.523 TLSTESTn1 : 10.01 5477.95 21.40 0.00 0.00 23332.39 3684.76 23687.71 00:13:37.523 =================================================================================================================== 00:13:37.523 Total : 5477.95 21.40 0.00 0.00 23332.39 3684.76 23687.71 00:13:37.523 0 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73311 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73311 ']' 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73311 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73311 00:13:37.523 killing process with pid 73311 00:13:37.523 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.523 00:13:37.523 Latency(us) 00:13:37.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.523 =================================================================================================================== 00:13:37.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73311' 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73311 00:13:37.523 [2024-07-15 22:24:51.105070] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:37.523 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73311 00:13:37.782 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.eXtlaEkaqG 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXtlaEkaqG 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXtlaEkaqG 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:37.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXtlaEkaqG 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eXtlaEkaqG' 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73444 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73444 /var/tmp/bdevperf.sock 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73444 ']' 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.783 22:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.783 [2024-07-15 22:24:51.347933] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
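Editor's note: the bdevperf run above follows a small, repeatable pattern: start bdevperf idle (-z) on its own RPC socket, attach an NVMe-oF controller with the PSK, then drive the configured workload through bdevperf.py. The sketch below uses the same binaries and arguments as the trace; the harness's waitforlisten/killprocess plumbing is reduced to a plain background job here, so it is a simplification rather than the test script itself.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # start bdevperf idle; -z makes it wait for bdevs to be attached over RPC
    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # attach the TLS-secured controller; --psk points at the 0600 key file
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG

    # run the verify workload for 10 seconds against the resulting TLSTESTn1 namespace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests
    kill "$bdevperf_pid"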
00:13:37.783 [2024-07-15 22:24:51.347991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73444 ] 00:13:38.042 [2024-07-15 22:24:51.480651] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.042 [2024-07-15 22:24:51.561632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.042 [2024-07-15 22:24:51.604092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.609 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.609 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:38.609 22:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:38.867 [2024-07-15 22:24:52.375120] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.867 [2024-07-15 22:24:52.375201] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:38.867 [2024-07-15 22:24:52.375212] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.eXtlaEkaqG 00:13:38.867 request: 00:13:38.867 { 00:13:38.867 "name": "TLSTEST", 00:13:38.867 "trtype": "tcp", 00:13:38.867 "traddr": "10.0.0.2", 00:13:38.867 "adrfam": "ipv4", 00:13:38.867 "trsvcid": "4420", 00:13:38.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.867 "prchk_reftag": false, 00:13:38.867 "prchk_guard": false, 00:13:38.867 "hdgst": false, 00:13:38.867 "ddgst": false, 00:13:38.867 "psk": "/tmp/tmp.eXtlaEkaqG", 00:13:38.867 "method": "bdev_nvme_attach_controller", 00:13:38.867 "req_id": 1 00:13:38.867 } 00:13:38.867 Got JSON-RPC error response 00:13:38.867 response: 00:13:38.867 { 00:13:38.867 "code": -1, 00:13:38.867 "message": "Operation not permitted" 00:13:38.867 } 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73444 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73444 ']' 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73444 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73444 00:13:38.867 killing process with pid 73444 00:13:38.867 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.867 00:13:38.867 Latency(us) 00:13:38.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.867 =================================================================================================================== 00:13:38.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73444' 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73444 00:13:38.867 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73444 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73262 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73262 ']' 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73262 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73262 00:13:39.128 killing process with pid 73262 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73262' 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73262 00:13:39.128 [2024-07-15 22:24:52.656857] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:39.128 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73262 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73472 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73472 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73472 ']' 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.402 22:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.402 [2024-07-15 22:24:52.912949] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
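Editor's note: the chmod 0666 followed by "NOT run_bdevperf" above is a negative test. With group/world-accessible permissions on the key file, bdev_nvme_attach_controller is expected to fail with "Operation not permitted", and the NOT helper inverts the exit status so the test only passes if the attach fails. Stripped to its essence it looks like the snippet below; in the real flow a fresh bdevperf instance is started first, and NOT is the autotest_common.sh helper rather than a bare if.

    key=/tmp/tmp.eXtlaEkaqG
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    chmod 0666 "$key"    # deliberately too permissive

    # expect the attach to be rejected; fail the test if it unexpectedly succeeds
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key"; then
      echo "ERROR: attach succeeded despite bad PSK permissions" >&2
      exit 1
    fi

    chmod 0600 "$key"    # restored later in the trace before the key is reused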
00:13:39.402 [2024-07-15 22:24:52.913014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.661 [2024-07-15 22:24:53.058406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.661 [2024-07-15 22:24:53.141107] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.661 [2024-07-15 22:24:53.141172] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.661 [2024-07-15 22:24:53.141182] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.661 [2024-07-15 22:24:53.141190] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.661 [2024-07-15 22:24:53.141197] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.661 [2024-07-15 22:24:53.141226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.661 [2024-07-15 22:24:53.181653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eXtlaEkaqG 00:13:40.226 22:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:40.484 [2024-07-15 22:24:53.974643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.484 22:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:40.742 22:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:40.742 [2024-07-15 22:24:54.354061] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:40.742 [2024-07-15 22:24:54.354244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.742 22:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:41.000 malloc0 00:13:41.001 22:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:41.258 22:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:41.516 [2024-07-15 22:24:55.005824] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:41.516 [2024-07-15 22:24:55.005865] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:41.516 [2024-07-15 22:24:55.005892] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:41.516 request: 00:13:41.516 { 00:13:41.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.516 "host": "nqn.2016-06.io.spdk:host1", 00:13:41.516 "psk": "/tmp/tmp.eXtlaEkaqG", 00:13:41.516 "method": "nvmf_subsystem_add_host", 00:13:41.516 "req_id": 1 00:13:41.516 } 00:13:41.516 Got JSON-RPC error response 00:13:41.516 response: 00:13:41.516 { 00:13:41.516 "code": -32603, 00:13:41.516 "message": "Internal error" 00:13:41.516 } 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73472 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73472 ']' 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73472 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73472 00:13:41.516 killing process with pid 73472 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73472' 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73472 00:13:41.516 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73472 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.eXtlaEkaqG 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73535 00:13:41.774 
22:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73535 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73535 ']' 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.774 22:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.774 [2024-07-15 22:24:55.313215] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:41.774 [2024-07-15 22:24:55.313280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.032 [2024-07-15 22:24:55.446146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.032 [2024-07-15 22:24:55.529462] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.032 [2024-07-15 22:24:55.529511] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.032 [2024-07-15 22:24:55.529520] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.032 [2024-07-15 22:24:55.529528] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.032 [2024-07-15 22:24:55.529535] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
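Editor's note: the -32603 "Internal error" returned by nvmf_subsystem_add_host a few entries above comes from the target refusing to read a PSK file whose mode is broader than owner read/write ("Incorrect permissions for PSK file"). If you want to catch that before issuing the RPC, a pre-flight check along these lines works; add_host_with_psk is a hypothetical convenience wrapper for illustration, not part of the test suite.

    # hypothetical helper: refuse to register a host if the PSK file is too open
    add_host_with_psk() {
      local subnqn=$1 hostnqn=$2 psk=$3 mode
      mode=$(stat -c '%a' "$psk")
      if [[ $mode != 600 && $mode != 400 ]]; then
        echo "PSK file $psk has mode $mode; expected 0600 (or 0400)" >&2
        return 1
      fi
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --psk "$psk"
    }

    add_host_with_psk nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXtlaEkaqG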
00:13:42.032 [2024-07-15 22:24:55.529558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.032 [2024-07-15 22:24:55.569996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.596 22:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.596 22:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:42.596 22:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.596 22:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.596 22:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.597 22:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.597 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:42.597 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eXtlaEkaqG 00:13:42.597 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:42.856 [2024-07-15 22:24:56.374506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.856 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:43.114 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:43.373 [2024-07-15 22:24:56.757926] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:43.373 [2024-07-15 22:24:56.758102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.373 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:43.373 malloc0 00:13:43.373 22:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:43.631 22:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:43.888 [2024-07-15 22:24:57.393761] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73584 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73584 /var/tmp/bdevperf.sock 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73584 ']' 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.888 22:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.888 [2024-07-15 22:24:57.459265] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:43.888 [2024-07-15 22:24:57.459497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73584 ] 00:13:44.146 [2024-07-15 22:24:57.600664] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.146 [2024-07-15 22:24:57.689404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.146 [2024-07-15 22:24:57.731722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.712 22:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.712 22:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:44.712 22:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:13:44.982 [2024-07-15 22:24:58.445203] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.982 [2024-07-15 22:24:58.445312] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:44.982 TLSTESTn1 00:13:44.982 22:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:45.268 22:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:45.268 "subsystems": [ 00:13:45.268 { 00:13:45.268 "subsystem": "keyring", 00:13:45.268 "config": [] 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "subsystem": "iobuf", 00:13:45.268 "config": [ 00:13:45.268 { 00:13:45.268 "method": "iobuf_set_options", 00:13:45.268 "params": { 00:13:45.268 "small_pool_count": 8192, 00:13:45.268 "large_pool_count": 1024, 00:13:45.268 "small_bufsize": 8192, 00:13:45.268 "large_bufsize": 135168 00:13:45.268 } 00:13:45.268 } 00:13:45.268 ] 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "subsystem": "sock", 00:13:45.268 "config": [ 00:13:45.268 { 00:13:45.268 "method": "sock_set_default_impl", 00:13:45.268 "params": { 00:13:45.268 "impl_name": "uring" 00:13:45.268 } 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "method": "sock_impl_set_options", 00:13:45.268 "params": { 00:13:45.268 "impl_name": "ssl", 00:13:45.268 "recv_buf_size": 4096, 00:13:45.268 "send_buf_size": 4096, 00:13:45.268 "enable_recv_pipe": true, 00:13:45.268 "enable_quickack": false, 00:13:45.268 "enable_placement_id": 0, 00:13:45.268 "enable_zerocopy_send_server": true, 00:13:45.268 "enable_zerocopy_send_client": false, 00:13:45.268 "zerocopy_threshold": 0, 00:13:45.268 "tls_version": 0, 00:13:45.268 "enable_ktls": false 00:13:45.268 } 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "method": "sock_impl_set_options", 00:13:45.268 "params": { 00:13:45.268 "impl_name": "posix", 00:13:45.268 "recv_buf_size": 2097152, 
00:13:45.268 "send_buf_size": 2097152, 00:13:45.268 "enable_recv_pipe": true, 00:13:45.268 "enable_quickack": false, 00:13:45.268 "enable_placement_id": 0, 00:13:45.268 "enable_zerocopy_send_server": true, 00:13:45.268 "enable_zerocopy_send_client": false, 00:13:45.268 "zerocopy_threshold": 0, 00:13:45.268 "tls_version": 0, 00:13:45.268 "enable_ktls": false 00:13:45.268 } 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "method": "sock_impl_set_options", 00:13:45.268 "params": { 00:13:45.268 "impl_name": "uring", 00:13:45.268 "recv_buf_size": 2097152, 00:13:45.268 "send_buf_size": 2097152, 00:13:45.268 "enable_recv_pipe": true, 00:13:45.268 "enable_quickack": false, 00:13:45.268 "enable_placement_id": 0, 00:13:45.268 "enable_zerocopy_send_server": false, 00:13:45.268 "enable_zerocopy_send_client": false, 00:13:45.268 "zerocopy_threshold": 0, 00:13:45.268 "tls_version": 0, 00:13:45.268 "enable_ktls": false 00:13:45.268 } 00:13:45.268 } 00:13:45.268 ] 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "subsystem": "vmd", 00:13:45.268 "config": [] 00:13:45.268 }, 00:13:45.268 { 00:13:45.268 "subsystem": "accel", 00:13:45.268 "config": [ 00:13:45.268 { 00:13:45.268 "method": "accel_set_options", 00:13:45.268 "params": { 00:13:45.268 "small_cache_size": 128, 00:13:45.268 "large_cache_size": 16, 00:13:45.268 "task_count": 2048, 00:13:45.268 "sequence_count": 2048, 00:13:45.268 "buf_count": 2048 00:13:45.268 } 00:13:45.268 } 00:13:45.268 ] 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "subsystem": "bdev", 00:13:45.269 "config": [ 00:13:45.269 { 00:13:45.269 "method": "bdev_set_options", 00:13:45.269 "params": { 00:13:45.269 "bdev_io_pool_size": 65535, 00:13:45.269 "bdev_io_cache_size": 256, 00:13:45.269 "bdev_auto_examine": true, 00:13:45.269 "iobuf_small_cache_size": 128, 00:13:45.269 "iobuf_large_cache_size": 16 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_raid_set_options", 00:13:45.269 "params": { 00:13:45.269 "process_window_size_kb": 1024 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_iscsi_set_options", 00:13:45.269 "params": { 00:13:45.269 "timeout_sec": 30 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_nvme_set_options", 00:13:45.269 "params": { 00:13:45.269 "action_on_timeout": "none", 00:13:45.269 "timeout_us": 0, 00:13:45.269 "timeout_admin_us": 0, 00:13:45.269 "keep_alive_timeout_ms": 10000, 00:13:45.269 "arbitration_burst": 0, 00:13:45.269 "low_priority_weight": 0, 00:13:45.269 "medium_priority_weight": 0, 00:13:45.269 "high_priority_weight": 0, 00:13:45.269 "nvme_adminq_poll_period_us": 10000, 00:13:45.269 "nvme_ioq_poll_period_us": 0, 00:13:45.269 "io_queue_requests": 0, 00:13:45.269 "delay_cmd_submit": true, 00:13:45.269 "transport_retry_count": 4, 00:13:45.269 "bdev_retry_count": 3, 00:13:45.269 "transport_ack_timeout": 0, 00:13:45.269 "ctrlr_loss_timeout_sec": 0, 00:13:45.269 "reconnect_delay_sec": 0, 00:13:45.269 "fast_io_fail_timeout_sec": 0, 00:13:45.269 "disable_auto_failback": false, 00:13:45.269 "generate_uuids": false, 00:13:45.269 "transport_tos": 0, 00:13:45.269 "nvme_error_stat": false, 00:13:45.269 "rdma_srq_size": 0, 00:13:45.269 "io_path_stat": false, 00:13:45.269 "allow_accel_sequence": false, 00:13:45.269 "rdma_max_cq_size": 0, 00:13:45.269 "rdma_cm_event_timeout_ms": 0, 00:13:45.269 "dhchap_digests": [ 00:13:45.269 "sha256", 00:13:45.269 "sha384", 00:13:45.269 "sha512" 00:13:45.269 ], 00:13:45.269 "dhchap_dhgroups": [ 00:13:45.269 "null", 00:13:45.269 "ffdhe2048", 00:13:45.269 "ffdhe3072", 
00:13:45.269 "ffdhe4096", 00:13:45.269 "ffdhe6144", 00:13:45.269 "ffdhe8192" 00:13:45.269 ] 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_nvme_set_hotplug", 00:13:45.269 "params": { 00:13:45.269 "period_us": 100000, 00:13:45.269 "enable": false 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_malloc_create", 00:13:45.269 "params": { 00:13:45.269 "name": "malloc0", 00:13:45.269 "num_blocks": 8192, 00:13:45.269 "block_size": 4096, 00:13:45.269 "physical_block_size": 4096, 00:13:45.269 "uuid": "2fc8e7c1-2f2e-4fbd-9483-e6c45014ad84", 00:13:45.269 "optimal_io_boundary": 0 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "bdev_wait_for_examine" 00:13:45.269 } 00:13:45.269 ] 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "subsystem": "nbd", 00:13:45.269 "config": [] 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "subsystem": "scheduler", 00:13:45.269 "config": [ 00:13:45.269 { 00:13:45.269 "method": "framework_set_scheduler", 00:13:45.269 "params": { 00:13:45.269 "name": "static" 00:13:45.269 } 00:13:45.269 } 00:13:45.269 ] 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "subsystem": "nvmf", 00:13:45.269 "config": [ 00:13:45.269 { 00:13:45.269 "method": "nvmf_set_config", 00:13:45.269 "params": { 00:13:45.269 "discovery_filter": "match_any", 00:13:45.269 "admin_cmd_passthru": { 00:13:45.269 "identify_ctrlr": false 00:13:45.269 } 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_set_max_subsystems", 00:13:45.269 "params": { 00:13:45.269 "max_subsystems": 1024 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_set_crdt", 00:13:45.269 "params": { 00:13:45.269 "crdt1": 0, 00:13:45.269 "crdt2": 0, 00:13:45.269 "crdt3": 0 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_create_transport", 00:13:45.269 "params": { 00:13:45.269 "trtype": "TCP", 00:13:45.269 "max_queue_depth": 128, 00:13:45.269 "max_io_qpairs_per_ctrlr": 127, 00:13:45.269 "in_capsule_data_size": 4096, 00:13:45.269 "max_io_size": 131072, 00:13:45.269 "io_unit_size": 131072, 00:13:45.269 "max_aq_depth": 128, 00:13:45.269 "num_shared_buffers": 511, 00:13:45.269 "buf_cache_size": 4294967295, 00:13:45.269 "dif_insert_or_strip": false, 00:13:45.269 "zcopy": false, 00:13:45.269 "c2h_success": false, 00:13:45.269 "sock_priority": 0, 00:13:45.269 "abort_timeout_sec": 1, 00:13:45.269 "ack_timeout": 0, 00:13:45.269 "data_wr_pool_size": 0 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_create_subsystem", 00:13:45.269 "params": { 00:13:45.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.269 "allow_any_host": false, 00:13:45.269 "serial_number": "SPDK00000000000001", 00:13:45.269 "model_number": "SPDK bdev Controller", 00:13:45.269 "max_namespaces": 10, 00:13:45.269 "min_cntlid": 1, 00:13:45.269 "max_cntlid": 65519, 00:13:45.269 "ana_reporting": false 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_subsystem_add_host", 00:13:45.269 "params": { 00:13:45.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.269 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.269 "psk": "/tmp/tmp.eXtlaEkaqG" 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_subsystem_add_ns", 00:13:45.269 "params": { 00:13:45.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.269 "namespace": { 00:13:45.269 "nsid": 1, 00:13:45.269 "bdev_name": "malloc0", 00:13:45.269 "nguid": "2FC8E7C12F2E4FBD9483E6C45014AD84", 00:13:45.269 "uuid": "2fc8e7c1-2f2e-4fbd-9483-e6c45014ad84", 
00:13:45.269 "no_auto_visible": false 00:13:45.269 } 00:13:45.269 } 00:13:45.269 }, 00:13:45.269 { 00:13:45.269 "method": "nvmf_subsystem_add_listener", 00:13:45.269 "params": { 00:13:45.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.269 "listen_address": { 00:13:45.269 "trtype": "TCP", 00:13:45.269 "adrfam": "IPv4", 00:13:45.269 "traddr": "10.0.0.2", 00:13:45.269 "trsvcid": "4420" 00:13:45.269 }, 00:13:45.269 "secure_channel": true 00:13:45.269 } 00:13:45.269 } 00:13:45.269 ] 00:13:45.269 } 00:13:45.269 ] 00:13:45.269 }' 00:13:45.269 22:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:45.528 22:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:45.528 "subsystems": [ 00:13:45.528 { 00:13:45.528 "subsystem": "keyring", 00:13:45.528 "config": [] 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "subsystem": "iobuf", 00:13:45.528 "config": [ 00:13:45.528 { 00:13:45.528 "method": "iobuf_set_options", 00:13:45.528 "params": { 00:13:45.528 "small_pool_count": 8192, 00:13:45.528 "large_pool_count": 1024, 00:13:45.528 "small_bufsize": 8192, 00:13:45.528 "large_bufsize": 135168 00:13:45.528 } 00:13:45.528 } 00:13:45.528 ] 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "subsystem": "sock", 00:13:45.528 "config": [ 00:13:45.528 { 00:13:45.528 "method": "sock_set_default_impl", 00:13:45.528 "params": { 00:13:45.528 "impl_name": "uring" 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "sock_impl_set_options", 00:13:45.528 "params": { 00:13:45.528 "impl_name": "ssl", 00:13:45.528 "recv_buf_size": 4096, 00:13:45.528 "send_buf_size": 4096, 00:13:45.528 "enable_recv_pipe": true, 00:13:45.528 "enable_quickack": false, 00:13:45.528 "enable_placement_id": 0, 00:13:45.528 "enable_zerocopy_send_server": true, 00:13:45.528 "enable_zerocopy_send_client": false, 00:13:45.528 "zerocopy_threshold": 0, 00:13:45.528 "tls_version": 0, 00:13:45.528 "enable_ktls": false 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "sock_impl_set_options", 00:13:45.528 "params": { 00:13:45.528 "impl_name": "posix", 00:13:45.528 "recv_buf_size": 2097152, 00:13:45.528 "send_buf_size": 2097152, 00:13:45.528 "enable_recv_pipe": true, 00:13:45.528 "enable_quickack": false, 00:13:45.528 "enable_placement_id": 0, 00:13:45.528 "enable_zerocopy_send_server": true, 00:13:45.528 "enable_zerocopy_send_client": false, 00:13:45.528 "zerocopy_threshold": 0, 00:13:45.528 "tls_version": 0, 00:13:45.528 "enable_ktls": false 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "sock_impl_set_options", 00:13:45.528 "params": { 00:13:45.528 "impl_name": "uring", 00:13:45.528 "recv_buf_size": 2097152, 00:13:45.528 "send_buf_size": 2097152, 00:13:45.528 "enable_recv_pipe": true, 00:13:45.528 "enable_quickack": false, 00:13:45.528 "enable_placement_id": 0, 00:13:45.528 "enable_zerocopy_send_server": false, 00:13:45.528 "enable_zerocopy_send_client": false, 00:13:45.528 "zerocopy_threshold": 0, 00:13:45.528 "tls_version": 0, 00:13:45.528 "enable_ktls": false 00:13:45.528 } 00:13:45.528 } 00:13:45.528 ] 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "subsystem": "vmd", 00:13:45.528 "config": [] 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "subsystem": "accel", 00:13:45.528 "config": [ 00:13:45.528 { 00:13:45.528 "method": "accel_set_options", 00:13:45.528 "params": { 00:13:45.528 "small_cache_size": 128, 00:13:45.528 "large_cache_size": 16, 00:13:45.528 "task_count": 2048, 00:13:45.528 "sequence_count": 
2048, 00:13:45.528 "buf_count": 2048 00:13:45.528 } 00:13:45.528 } 00:13:45.528 ] 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "subsystem": "bdev", 00:13:45.528 "config": [ 00:13:45.528 { 00:13:45.528 "method": "bdev_set_options", 00:13:45.528 "params": { 00:13:45.528 "bdev_io_pool_size": 65535, 00:13:45.528 "bdev_io_cache_size": 256, 00:13:45.528 "bdev_auto_examine": true, 00:13:45.528 "iobuf_small_cache_size": 128, 00:13:45.528 "iobuf_large_cache_size": 16 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "bdev_raid_set_options", 00:13:45.528 "params": { 00:13:45.528 "process_window_size_kb": 1024 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "bdev_iscsi_set_options", 00:13:45.528 "params": { 00:13:45.528 "timeout_sec": 30 00:13:45.528 } 00:13:45.528 }, 00:13:45.528 { 00:13:45.528 "method": "bdev_nvme_set_options", 00:13:45.528 "params": { 00:13:45.528 "action_on_timeout": "none", 00:13:45.528 "timeout_us": 0, 00:13:45.528 "timeout_admin_us": 0, 00:13:45.528 "keep_alive_timeout_ms": 10000, 00:13:45.528 "arbitration_burst": 0, 00:13:45.528 "low_priority_weight": 0, 00:13:45.528 "medium_priority_weight": 0, 00:13:45.528 "high_priority_weight": 0, 00:13:45.528 "nvme_adminq_poll_period_us": 10000, 00:13:45.528 "nvme_ioq_poll_period_us": 0, 00:13:45.528 "io_queue_requests": 512, 00:13:45.528 "delay_cmd_submit": true, 00:13:45.528 "transport_retry_count": 4, 00:13:45.528 "bdev_retry_count": 3, 00:13:45.528 "transport_ack_timeout": 0, 00:13:45.528 "ctrlr_loss_timeout_sec": 0, 00:13:45.528 "reconnect_delay_sec": 0, 00:13:45.528 "fast_io_fail_timeout_sec": 0, 00:13:45.528 "disable_auto_failback": false, 00:13:45.529 "generate_uuids": false, 00:13:45.529 "transport_tos": 0, 00:13:45.529 "nvme_error_stat": false, 00:13:45.529 "rdma_srq_size": 0, 00:13:45.529 "io_path_stat": false, 00:13:45.529 "allow_accel_sequence": false, 00:13:45.529 "rdma_max_cq_size": 0, 00:13:45.529 "rdma_cm_event_timeout_ms": 0, 00:13:45.529 "dhchap_digests": [ 00:13:45.529 "sha256", 00:13:45.529 "sha384", 00:13:45.529 "sha512" 00:13:45.529 ], 00:13:45.529 "dhchap_dhgroups": [ 00:13:45.529 "null", 00:13:45.529 "ffdhe2048", 00:13:45.529 "ffdhe3072", 00:13:45.529 "ffdhe4096", 00:13:45.529 "ffdhe6144", 00:13:45.529 "ffdhe8192" 00:13:45.529 ] 00:13:45.529 } 00:13:45.529 }, 00:13:45.529 { 00:13:45.529 "method": "bdev_nvme_attach_controller", 00:13:45.529 "params": { 00:13:45.529 "name": "TLSTEST", 00:13:45.529 "trtype": "TCP", 00:13:45.529 "adrfam": "IPv4", 00:13:45.529 "traddr": "10.0.0.2", 00:13:45.529 "trsvcid": "4420", 00:13:45.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.529 "prchk_reftag": false, 00:13:45.529 "prchk_guard": false, 00:13:45.529 "ctrlr_loss_timeout_sec": 0, 00:13:45.529 "reconnect_delay_sec": 0, 00:13:45.529 "fast_io_fail_timeout_sec": 0, 00:13:45.529 "psk": "/tmp/tmp.eXtlaEkaqG", 00:13:45.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.529 "hdgst": false, 00:13:45.529 "ddgst": false 00:13:45.529 } 00:13:45.529 }, 00:13:45.529 { 00:13:45.529 "method": "bdev_nvme_set_hotplug", 00:13:45.529 "params": { 00:13:45.529 "period_us": 100000, 00:13:45.529 "enable": false 00:13:45.529 } 00:13:45.529 }, 00:13:45.529 { 00:13:45.529 "method": "bdev_wait_for_examine" 00:13:45.529 } 00:13:45.529 ] 00:13:45.529 }, 00:13:45.529 { 00:13:45.529 "subsystem": "nbd", 00:13:45.529 "config": [] 00:13:45.529 } 00:13:45.529 ] 00:13:45.529 }' 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73584 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 73584 ']' 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73584 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73584 00:13:45.529 killing process with pid 73584 00:13:45.529 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.529 00:13:45.529 Latency(us) 00:13:45.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.529 =================================================================================================================== 00:13:45.529 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73584' 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73584 00:13:45.529 [2024-07-15 22:24:59.100882] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:45.529 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73584 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73535 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73535 ']' 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73535 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73535 00:13:45.788 killing process with pid 73535 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73535' 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73535 00:13:45.788 [2024-07-15 22:24:59.325454] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:45.788 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73535 00:13:46.047 22:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:46.047 22:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.047 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.047 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.047 22:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:46.047 "subsystems": [ 00:13:46.047 { 00:13:46.047 "subsystem": "keyring", 00:13:46.047 "config": [] 00:13:46.047 }, 00:13:46.047 { 00:13:46.047 "subsystem": "iobuf", 00:13:46.047 "config": [ 00:13:46.047 { 00:13:46.047 "method": "iobuf_set_options", 
00:13:46.047 "params": { 00:13:46.047 "small_pool_count": 8192, 00:13:46.047 "large_pool_count": 1024, 00:13:46.047 "small_bufsize": 8192, 00:13:46.047 "large_bufsize": 135168 00:13:46.047 } 00:13:46.047 } 00:13:46.047 ] 00:13:46.047 }, 00:13:46.047 { 00:13:46.047 "subsystem": "sock", 00:13:46.047 "config": [ 00:13:46.047 { 00:13:46.047 "method": "sock_set_default_impl", 00:13:46.047 "params": { 00:13:46.047 "impl_name": "uring" 00:13:46.047 } 00:13:46.047 }, 00:13:46.047 { 00:13:46.047 "method": "sock_impl_set_options", 00:13:46.047 "params": { 00:13:46.047 "impl_name": "ssl", 00:13:46.047 "recv_buf_size": 4096, 00:13:46.047 "send_buf_size": 4096, 00:13:46.047 "enable_recv_pipe": true, 00:13:46.047 "enable_quickack": false, 00:13:46.047 "enable_placement_id": 0, 00:13:46.047 "enable_zerocopy_send_server": true, 00:13:46.047 "enable_zerocopy_send_client": false, 00:13:46.047 "zerocopy_threshold": 0, 00:13:46.047 "tls_version": 0, 00:13:46.047 "enable_ktls": false 00:13:46.047 } 00:13:46.047 }, 00:13:46.048 { 00:13:46.048 "method": "sock_impl_set_options", 00:13:46.048 "params": { 00:13:46.048 "impl_name": "posix", 00:13:46.048 "recv_buf_size": 2097152, 00:13:46.048 "send_buf_size": 2097152, 00:13:46.048 "enable_recv_pipe": true, 00:13:46.048 "enable_quickack": false, 00:13:46.048 "enable_placement_id": 0, 00:13:46.048 "enable_zerocopy_send_server": true, 00:13:46.048 "enable_zerocopy_send_client": false, 00:13:46.048 "zerocopy_threshold": 0, 00:13:46.048 "tls_version": 0, 00:13:46.048 "enable_ktls": false 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "sock_impl_set_options", 00:13:46.048 "params": { 00:13:46.048 "impl_name": "uring", 00:13:46.048 "recv_buf_size": 2097152, 00:13:46.048 "send_buf_size": 2097152, 00:13:46.048 "enable_recv_pipe": true, 00:13:46.048 "enable_quickack": false, 00:13:46.048 "enable_placement_id": 0, 00:13:46.048 "enable_zerocopy_send_server": false, 00:13:46.048 "enable_zerocopy_send_client": false, 00:13:46.048 "zerocopy_threshold": 0, 00:13:46.048 "tls_version": 0, 00:13:46.048 "enable_ktls": false 00:13:46.048 } 00:13:46.048 } 00:13:46.048 ] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "vmd", 00:13:46.048 "config": [] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "accel", 00:13:46.048 "config": [ 00:13:46.048 { 00:13:46.048 "method": "accel_set_options", 00:13:46.048 "params": { 00:13:46.048 "small_cache_size": 128, 00:13:46.048 "large_cache_size": 16, 00:13:46.048 "task_count": 2048, 00:13:46.048 "sequence_count": 2048, 00:13:46.048 "buf_count": 2048 00:13:46.048 } 00:13:46.048 } 00:13:46.048 ] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "bdev", 00:13:46.048 "config": [ 00:13:46.048 { 00:13:46.048 "method": "bdev_set_options", 00:13:46.048 "params": { 00:13:46.048 "bdev_io_pool_size": 65535, 00:13:46.048 "bdev_io_cache_size": 256, 00:13:46.048 "bdev_auto_examine": true, 00:13:46.048 "iobuf_small_cache_size": 128, 00:13:46.048 "iobuf_large_cache_size": 16 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_raid_set_options", 00:13:46.048 "params": { 00:13:46.048 "process_window_size_kb": 1024 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_iscsi_set_options", 00:13:46.048 "params": { 00:13:46.048 "timeout_sec": 30 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_nvme_set_options", 00:13:46.048 "params": { 00:13:46.048 "action_on_timeout": "none", 00:13:46.048 "timeout_us": 0, 00:13:46.048 "timeout_admin_us": 0, 00:13:46.048 
"keep_alive_timeout_ms": 10000, 00:13:46.048 "arbitration_burst": 0, 00:13:46.048 "low_priority_weight": 0, 00:13:46.048 "medium_priority_weight": 0, 00:13:46.048 "high_priority_weight": 0, 00:13:46.048 "nvme_adminq_poll_period_us": 10000, 00:13:46.048 "nvme_ioq_poll_period_us": 0, 00:13:46.048 "io_queue_requests": 0, 00:13:46.048 "delay_cmd_submit": true, 00:13:46.048 "transport_retry_count": 4, 00:13:46.048 "bdev_retry_count": 3, 00:13:46.048 "transport_ack_timeout": 0, 00:13:46.048 "ctrlr_loss_timeout_sec": 0, 00:13:46.048 "reconnect_delay_sec": 0, 00:13:46.048 "fast_io_fail_timeout_sec": 0, 00:13:46.048 "disable_auto_failback": false, 00:13:46.048 "generate_uuids": false, 00:13:46.048 "transport_tos": 0, 00:13:46.048 "nvme_error_stat": false, 00:13:46.048 "rdma_srq_size": 0, 00:13:46.048 "io_path_stat": false, 00:13:46.048 "allow_accel_sequence": false, 00:13:46.048 "rdma_max_cq_size": 0, 00:13:46.048 "rdma_cm_event_timeout_ms": 0, 00:13:46.048 "dhchap_digests": [ 00:13:46.048 "sha256", 00:13:46.048 "sha384", 00:13:46.048 "sha512" 00:13:46.048 ], 00:13:46.048 "dhchap_dhgroups": [ 00:13:46.048 "null", 00:13:46.048 "ffdhe2048", 00:13:46.048 "ffdhe3072", 00:13:46.048 "ffdhe4096", 00:13:46.048 "ffdhe6144", 00:13:46.048 "ffdhe8192" 00:13:46.048 ] 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_nvme_set_hotplug", 00:13:46.048 "params": { 00:13:46.048 "period_us": 100000, 00:13:46.048 "enable": false 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_malloc_create", 00:13:46.048 "params": { 00:13:46.048 "name": "malloc0", 00:13:46.048 "num_blocks": 8192, 00:13:46.048 "block_size": 4096, 00:13:46.048 "physical_block_size": 4096, 00:13:46.048 "uuid": "2fc8e7c1-2f2e-4fbd-9483-e6c45014ad84", 00:13:46.048 "optimal_io_boundary": 0 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "method": "bdev_wait_for_examine" 00:13:46.048 } 00:13:46.048 ] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "nbd", 00:13:46.048 "config": [] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "scheduler", 00:13:46.048 "config": [ 00:13:46.048 { 00:13:46.048 "method": "framework_set_scheduler", 00:13:46.048 "params": { 00:13:46.048 "name": "static" 00:13:46.048 } 00:13:46.048 } 00:13:46.048 ] 00:13:46.048 }, 00:13:46.048 { 00:13:46.048 "subsystem": "nvmf", 00:13:46.048 "config": [ 00:13:46.048 { 00:13:46.048 "method": "nvmf_set_config", 00:13:46.048 "params": { 00:13:46.048 "discovery_filter": "match_any", 00:13:46.048 "admin_cmd_passthru": { 00:13:46.048 "identify_ctrlr": false 00:13:46.048 } 00:13:46.048 } 00:13:46.048 }, 00:13:46.048 { 00:13:46.049 "method": "nvmf_set_max_subsystems", 00:13:46.049 "params": { 00:13:46.049 "max_subsystems": 1024 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_set_crdt", 00:13:46.049 "params": { 00:13:46.049 "crdt1": 0, 00:13:46.049 "crdt2": 0, 00:13:46.049 "crdt3": 0 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_create_transport", 00:13:46.049 "params": { 00:13:46.049 "trtype": "TCP", 00:13:46.049 "max_queue_depth": 128, 00:13:46.049 "max_io_qpairs_per_ctrlr": 127, 00:13:46.049 "in_capsule_data_size": 4096, 00:13:46.049 "max_io_size": 131072, 00:13:46.049 "io_unit_size": 131072, 00:13:46.049 "max_aq_depth": 128, 00:13:46.049 "num_shared_buffers": 511, 00:13:46.049 "buf_cache_size": 4294967295, 00:13:46.049 "dif_insert_or_strip": false, 00:13:46.049 "zcopy": false, 00:13:46.049 "c2h_success": false, 00:13:46.049 "sock_priority": 0, 00:13:46.049 
"abort_timeout_sec": 1, 00:13:46.049 "ack_timeout": 0, 00:13:46.049 "data_wr_pool_size": 0 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_create_subsystem", 00:13:46.049 "params": { 00:13:46.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.049 "allow_any_host": false, 00:13:46.049 "serial_number": "SPDK00000000000001", 00:13:46.049 "model_number": "SPDK bdev Controller", 00:13:46.049 "max_namespaces": 10, 00:13:46.049 "min_cntlid": 1, 00:13:46.049 "max_cntlid": 65519, 00:13:46.049 "ana_reporting": false 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_subsystem_add_host", 00:13:46.049 "params": { 00:13:46.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.049 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.049 "psk": "/tmp/tmp.eXtlaEkaqG" 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_subsystem_add_ns", 00:13:46.049 "params": { 00:13:46.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.049 "namespace": { 00:13:46.049 "nsid": 1, 00:13:46.049 "bdev_name": "malloc0", 00:13:46.049 "nguid": "2FC8E7C12F2E4FBD9483E6C45014AD84", 00:13:46.049 "uuid": "2fc8e7c1-2f2e-4fbd-9483-e6c45014ad84", 00:13:46.049 "no_auto_visible": false 00:13:46.049 } 00:13:46.049 } 00:13:46.049 }, 00:13:46.049 { 00:13:46.049 "method": "nvmf_subsystem_add_listener", 00:13:46.049 "params": { 00:13:46.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.049 "listen_address": { 00:13:46.049 "trtype": "TCP", 00:13:46.049 "adrfam": "IPv4", 00:13:46.049 "traddr": "10.0.0.2", 00:13:46.049 "trsvcid": "4420" 00:13:46.049 }, 00:13:46.049 "secure_channel": true 00:13:46.049 } 00:13:46.049 } 00:13:46.049 ] 00:13:46.049 } 00:13:46.049 ] 00:13:46.049 }' 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73627 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73627 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73627 ']' 00:13:46.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.049 22:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.049 [2024-07-15 22:24:59.566763] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:46.049 [2024-07-15 22:24:59.566821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.308 [2024-07-15 22:24:59.710097] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.308 [2024-07-15 22:24:59.798845] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:46.308 [2024-07-15 22:24:59.799069] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.308 [2024-07-15 22:24:59.799156] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.308 [2024-07-15 22:24:59.799202] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.308 [2024-07-15 22:24:59.799226] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.308 [2024-07-15 22:24:59.799319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.567 [2024-07-15 22:24:59.952895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.567 [2024-07-15 22:25:00.013635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.567 [2024-07-15 22:25:00.029469] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:46.567 [2024-07-15 22:25:00.045440] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.567 [2024-07-15 22:25:00.045895] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.825 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.825 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:46.825 22:25:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.825 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.825 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73659 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73659 /var/tmp/bdevperf.sock 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73659 ']' 00:13:47.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:47.084 22:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:47.084 "subsystems": [ 00:13:47.084 { 00:13:47.084 "subsystem": "keyring", 00:13:47.084 "config": [] 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "subsystem": "iobuf", 00:13:47.084 "config": [ 00:13:47.084 { 00:13:47.084 "method": "iobuf_set_options", 00:13:47.084 "params": { 00:13:47.084 "small_pool_count": 8192, 00:13:47.084 "large_pool_count": 1024, 00:13:47.084 "small_bufsize": 8192, 00:13:47.084 "large_bufsize": 135168 00:13:47.084 } 00:13:47.084 } 00:13:47.084 ] 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "subsystem": "sock", 00:13:47.084 "config": [ 00:13:47.084 { 00:13:47.084 "method": "sock_set_default_impl", 00:13:47.084 "params": { 00:13:47.084 "impl_name": "uring" 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "sock_impl_set_options", 00:13:47.084 "params": { 00:13:47.084 "impl_name": "ssl", 00:13:47.084 "recv_buf_size": 4096, 00:13:47.084 "send_buf_size": 4096, 00:13:47.084 "enable_recv_pipe": true, 00:13:47.084 "enable_quickack": false, 00:13:47.084 "enable_placement_id": 0, 00:13:47.084 "enable_zerocopy_send_server": true, 00:13:47.084 "enable_zerocopy_send_client": false, 00:13:47.084 "zerocopy_threshold": 0, 00:13:47.084 "tls_version": 0, 00:13:47.084 "enable_ktls": false 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "sock_impl_set_options", 00:13:47.084 "params": { 00:13:47.084 "impl_name": "posix", 00:13:47.084 "recv_buf_size": 2097152, 00:13:47.084 "send_buf_size": 2097152, 00:13:47.084 "enable_recv_pipe": true, 00:13:47.084 "enable_quickack": false, 00:13:47.084 "enable_placement_id": 0, 00:13:47.084 "enable_zerocopy_send_server": true, 00:13:47.084 "enable_zerocopy_send_client": false, 00:13:47.084 "zerocopy_threshold": 0, 00:13:47.084 "tls_version": 0, 00:13:47.084 "enable_ktls": false 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "sock_impl_set_options", 00:13:47.084 "params": { 00:13:47.084 "impl_name": "uring", 00:13:47.084 "recv_buf_size": 2097152, 00:13:47.084 "send_buf_size": 2097152, 00:13:47.084 "enable_recv_pipe": true, 00:13:47.084 "enable_quickack": false, 00:13:47.084 "enable_placement_id": 0, 00:13:47.084 "enable_zerocopy_send_server": false, 00:13:47.084 "enable_zerocopy_send_client": false, 00:13:47.084 "zerocopy_threshold": 0, 00:13:47.084 "tls_version": 0, 00:13:47.084 "enable_ktls": false 00:13:47.084 } 00:13:47.084 } 00:13:47.084 ] 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "subsystem": "vmd", 00:13:47.084 "config": [] 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "subsystem": "accel", 00:13:47.084 "config": [ 00:13:47.084 { 00:13:47.084 "method": "accel_set_options", 00:13:47.084 "params": { 00:13:47.084 "small_cache_size": 128, 00:13:47.084 "large_cache_size": 16, 00:13:47.084 "task_count": 2048, 00:13:47.084 "sequence_count": 2048, 00:13:47.084 "buf_count": 2048 00:13:47.084 } 00:13:47.084 } 00:13:47.084 ] 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "subsystem": "bdev", 00:13:47.084 "config": [ 00:13:47.084 { 00:13:47.084 "method": "bdev_set_options", 00:13:47.084 "params": { 00:13:47.084 "bdev_io_pool_size": 65535, 00:13:47.084 "bdev_io_cache_size": 256, 00:13:47.084 "bdev_auto_examine": true, 00:13:47.084 
"iobuf_small_cache_size": 128, 00:13:47.084 "iobuf_large_cache_size": 16 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "bdev_raid_set_options", 00:13:47.084 "params": { 00:13:47.084 "process_window_size_kb": 1024 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "bdev_iscsi_set_options", 00:13:47.084 "params": { 00:13:47.084 "timeout_sec": 30 00:13:47.084 } 00:13:47.084 }, 00:13:47.084 { 00:13:47.084 "method": "bdev_nvme_set_options", 00:13:47.084 "params": { 00:13:47.084 "action_on_timeout": "none", 00:13:47.084 "timeout_us": 0, 00:13:47.085 "timeout_admin_us": 0, 00:13:47.085 "keep_alive_timeout_ms": 10000, 00:13:47.085 "arbitration_burst": 0, 00:13:47.085 "low_priority_weight": 0, 00:13:47.085 "medium_priority_weight": 0, 00:13:47.085 "high_priority_weight": 0, 00:13:47.085 "nvme_adminq_poll_period_us": 10000, 00:13:47.085 "nvme_ioq_poll_period_us": 0, 00:13:47.085 "io_queue_requests": 512, 00:13:47.085 "delay_cmd_submit": true, 00:13:47.085 "transport_retry_count": 4, 00:13:47.085 "bdev_retry_count": 3, 00:13:47.085 "transport_ack_timeout": 0, 00:13:47.085 "ctrlr_loss_timeout_sec": 0, 00:13:47.085 "reconnect_delay_sec": 0, 00:13:47.085 "fast_io_fail_timeout_sec": 0, 00:13:47.085 "disable_auto_failback": false, 00:13:47.085 "generate_uuids": false, 00:13:47.085 "transport_tos": 0, 00:13:47.085 "nvme_error_stat": false, 00:13:47.085 "rdma_srq_size": 0, 00:13:47.085 "io_path_stat": false, 00:13:47.085 "allow_accel_sequence": false, 00:13:47.085 "rdma_max_cq_size": 0, 00:13:47.085 "rdma_cm_event_timeout_ms": 0, 00:13:47.085 "dhchap_digests": [ 00:13:47.085 "sha256", 00:13:47.085 "sha384", 00:13:47.085 "sha512" 00:13:47.085 ], 00:13:47.085 "dhchap_dhgroups": [ 00:13:47.085 "null", 00:13:47.085 "ffdhe2048", 00:13:47.085 "ffdhe3072", 00:13:47.085 "ffdhe4096", 00:13:47.085 "ffdhe6144", 00:13:47.085 "ffdhe8192" 00:13:47.085 ] 00:13:47.085 } 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "method": "bdev_nvme_attach_controller", 00:13:47.085 "params": { 00:13:47.085 "name": "TLSTEST", 00:13:47.085 "trtype": "TCP", 00:13:47.085 "adrfam": "IPv4", 00:13:47.085 "traddr": "10.0.0.2", 00:13:47.085 "trsvcid": "4420", 00:13:47.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.085 "prchk_reftag": false, 00:13:47.085 "prchk_guard": false, 00:13:47.085 "ctrlr_loss_timeout_sec": 0, 00:13:47.085 "reconnect_delay_sec": 0, 00:13:47.085 "fast_io_fail_timeout_sec": 0, 00:13:47.085 "psk": "/tmp/tmp.eXtlaEkaqG", 00:13:47.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.085 "hdgst": false, 00:13:47.085 "ddgst": false 00:13:47.085 } 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "method": "bdev_nvme_set_hotplug", 00:13:47.085 "params": { 00:13:47.085 "period_us": 100000, 00:13:47.085 "enable": false 00:13:47.085 } 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "method": "bdev_wait_for_examine" 00:13:47.085 } 00:13:47.085 ] 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "subsystem": "nbd", 00:13:47.085 "config": [] 00:13:47.085 } 00:13:47.085 ] 00:13:47.085 }' 00:13:47.085 22:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 [2024-07-15 22:25:00.530799] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:47.085 [2024-07-15 22:25:00.530862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73659 ] 00:13:47.085 [2024-07-15 22:25:00.674415] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.343 [2024-07-15 22:25:00.753526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.343 [2024-07-15 22:25:00.877087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.343 [2024-07-15 22:25:00.909260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.343 [2024-07-15 22:25:00.909376] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:47.908 22:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.908 22:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:47.908 22:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:47.908 Running I/O for 10 seconds... 00:13:57.915 00:13:57.915 Latency(us) 00:13:57.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.915 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:57.915 Verification LBA range: start 0x0 length 0x2000 00:13:57.915 TLSTESTn1 : 10.01 5454.22 21.31 0.00 0.00 23434.47 3526.84 24951.06 00:13:57.915 =================================================================================================================== 00:13:57.915 Total : 5454.22 21.31 0.00 0.00 23434.47 3526.84 24951.06 00:13:57.915 0 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73659 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73659 ']' 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73659 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73659 00:13:57.915 killing process with pid 73659 00:13:57.915 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.915 00:13:57.915 Latency(us) 00:13:57.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.915 =================================================================================================================== 00:13:57.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73659' 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73659 00:13:57.915 [2024-07-15 22:25:11.502216] app.c:1029:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:57.915 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73659 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73627 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73627 ']' 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73627 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73627 00:13:58.174 killing process with pid 73627 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73627' 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73627 00:13:58.174 [2024-07-15 22:25:11.725165] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:58.174 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73627 00:13:58.432 22:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:58.432 22:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73792 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73792 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73792 ']' 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.433 22:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.433 [2024-07-15 22:25:11.980572] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:58.433 [2024-07-15 22:25:11.980651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.691 [2024-07-15 22:25:12.125701] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.691 [2024-07-15 22:25:12.220986] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:58.691 [2024-07-15 22:25:12.221030] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.691 [2024-07-15 22:25:12.221039] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.691 [2024-07-15 22:25:12.221047] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.691 [2024-07-15 22:25:12.221054] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.691 [2024-07-15 22:25:12.221083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.691 [2024-07-15 22:25:12.262150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.eXtlaEkaqG 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eXtlaEkaqG 00:13:59.260 22:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:59.518 [2024-07-15 22:25:13.093118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.518 22:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:59.850 22:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:00.107 [2024-07-15 22:25:13.516542] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.107 [2024-07-15 22:25:13.516810] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.107 22:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:00.107 malloc0 00:14:00.107 22:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:00.365 22:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eXtlaEkaqG 00:14:00.622 [2024-07-15 22:25:14.080851] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73841 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73841 /var/tmp/bdevperf.sock 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73841 ']' 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.622 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.622 [2024-07-15 22:25:14.148428] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:00.622 [2024-07-15 22:25:14.148699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73841 ] 00:14:00.881 [2024-07-15 22:25:14.290348] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.881 [2024-07-15 22:25:14.381260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.881 [2024-07-15 22:25:14.422241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.447 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.447 22:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:01.447 22:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXtlaEkaqG 00:14:01.706 22:25:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:01.706 [2024-07-15 22:25:15.326718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.964 nvme0n1 00:14:01.964 22:25:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:01.965 Running I/O for 1 seconds... 
00:14:02.900 00:14:02.900 Latency(us) 00:14:02.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.900 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:02.900 Verification LBA range: start 0x0 length 0x2000 00:14:02.900 nvme0n1 : 1.01 5813.22 22.71 0.00 0.00 21885.94 3526.84 17792.10 00:14:02.900 =================================================================================================================== 00:14:02.900 Total : 5813.22 22.71 0.00 0.00 21885.94 3526.84 17792.10 00:14:02.900 0 00:14:02.900 22:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73841 00:14:02.900 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73841 ']' 00:14:02.900 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73841 00:14:02.900 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:02.900 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73841 00:14:03.159 killing process with pid 73841 00:14:03.159 Received shutdown signal, test time was about 1.000000 seconds 00:14:03.159 00:14:03.159 Latency(us) 00:14:03.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.159 =================================================================================================================== 00:14:03.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73841' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73841 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73841 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73792 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73792 ']' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73792 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73792 00:14:03.159 killing process with pid 73792 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73792' 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73792 00:14:03.159 [2024-07-15 22:25:16.784909] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:03.159 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73792 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73891 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73891 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73891 ']' 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.418 22:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.418 [2024-07-15 22:25:17.039019] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:03.418 [2024-07-15 22:25:17.039086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.676 [2024-07-15 22:25:17.182806] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.676 [2024-07-15 22:25:17.273791] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.676 [2024-07-15 22:25:17.274002] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.676 [2024-07-15 22:25:17.274152] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.676 [2024-07-15 22:25:17.274163] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.676 [2024-07-15 22:25:17.274170] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.676 [2024-07-15 22:25:17.274205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.934 [2024-07-15 22:25:17.316148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.498 [2024-07-15 22:25:17.933480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.498 malloc0 00:14:04.498 [2024-07-15 22:25:17.964799] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:04.498 [2024-07-15 22:25:17.965156] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73919 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73919 /var/tmp/bdevperf.sock 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73919 ']' 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.498 22:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.498 [2024-07-15 22:25:18.045624] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:14:04.498 [2024-07-15 22:25:18.045687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73919 ] 00:14:04.781 [2024-07-15 22:25:18.172671] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.781 [2024-07-15 22:25:18.267307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.781 [2024-07-15 22:25:18.308477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.348 22:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.348 22:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:05.348 22:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXtlaEkaqG 00:14:05.606 22:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:05.606 [2024-07-15 22:25:19.202466] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.864 nvme0n1 00:14:05.864 22:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.864 Running I/O for 1 seconds... 00:14:06.796 00:14:06.796 Latency(us) 00:14:06.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:06.796 Verification LBA range: start 0x0 length 0x2000 00:14:06.796 nvme0n1 : 1.01 5561.99 21.73 0.00 0.00 22828.90 5132.34 23477.15 00:14:06.796 =================================================================================================================== 00:14:06.796 Total : 5561.99 21.73 0.00 0.00 22828.90 5132.34 23477.15 00:14:06.796 0 00:14:07.054 22:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:07.054 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.054 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.054 22:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:07.054 "subsystems": [ 00:14:07.054 { 00:14:07.054 "subsystem": "keyring", 00:14:07.054 "config": [ 00:14:07.054 { 00:14:07.054 "method": "keyring_file_add_key", 00:14:07.054 "params": { 00:14:07.054 "name": "key0", 00:14:07.054 "path": "/tmp/tmp.eXtlaEkaqG" 00:14:07.054 } 00:14:07.054 } 00:14:07.054 ] 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "subsystem": "iobuf", 00:14:07.054 "config": [ 00:14:07.054 { 00:14:07.054 "method": "iobuf_set_options", 00:14:07.054 "params": { 00:14:07.054 "small_pool_count": 8192, 00:14:07.054 "large_pool_count": 1024, 00:14:07.054 "small_bufsize": 8192, 00:14:07.054 "large_bufsize": 135168 00:14:07.054 } 00:14:07.054 } 00:14:07.054 ] 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "subsystem": "sock", 00:14:07.054 "config": [ 00:14:07.054 { 00:14:07.054 "method": "sock_set_default_impl", 00:14:07.054 "params": { 00:14:07.054 "impl_name": "uring" 
00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "sock_impl_set_options", 00:14:07.054 "params": { 00:14:07.054 "impl_name": "ssl", 00:14:07.054 "recv_buf_size": 4096, 00:14:07.054 "send_buf_size": 4096, 00:14:07.054 "enable_recv_pipe": true, 00:14:07.054 "enable_quickack": false, 00:14:07.054 "enable_placement_id": 0, 00:14:07.054 "enable_zerocopy_send_server": true, 00:14:07.054 "enable_zerocopy_send_client": false, 00:14:07.054 "zerocopy_threshold": 0, 00:14:07.054 "tls_version": 0, 00:14:07.054 "enable_ktls": false 00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "sock_impl_set_options", 00:14:07.054 "params": { 00:14:07.054 "impl_name": "posix", 00:14:07.054 "recv_buf_size": 2097152, 00:14:07.054 "send_buf_size": 2097152, 00:14:07.054 "enable_recv_pipe": true, 00:14:07.054 "enable_quickack": false, 00:14:07.054 "enable_placement_id": 0, 00:14:07.054 "enable_zerocopy_send_server": true, 00:14:07.054 "enable_zerocopy_send_client": false, 00:14:07.054 "zerocopy_threshold": 0, 00:14:07.054 "tls_version": 0, 00:14:07.054 "enable_ktls": false 00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "sock_impl_set_options", 00:14:07.054 "params": { 00:14:07.054 "impl_name": "uring", 00:14:07.054 "recv_buf_size": 2097152, 00:14:07.054 "send_buf_size": 2097152, 00:14:07.054 "enable_recv_pipe": true, 00:14:07.054 "enable_quickack": false, 00:14:07.054 "enable_placement_id": 0, 00:14:07.054 "enable_zerocopy_send_server": false, 00:14:07.054 "enable_zerocopy_send_client": false, 00:14:07.054 "zerocopy_threshold": 0, 00:14:07.054 "tls_version": 0, 00:14:07.054 "enable_ktls": false 00:14:07.054 } 00:14:07.054 } 00:14:07.054 ] 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "subsystem": "vmd", 00:14:07.054 "config": [] 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "subsystem": "accel", 00:14:07.054 "config": [ 00:14:07.054 { 00:14:07.054 "method": "accel_set_options", 00:14:07.054 "params": { 00:14:07.054 "small_cache_size": 128, 00:14:07.054 "large_cache_size": 16, 00:14:07.054 "task_count": 2048, 00:14:07.054 "sequence_count": 2048, 00:14:07.054 "buf_count": 2048 00:14:07.054 } 00:14:07.054 } 00:14:07.054 ] 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "subsystem": "bdev", 00:14:07.054 "config": [ 00:14:07.054 { 00:14:07.054 "method": "bdev_set_options", 00:14:07.054 "params": { 00:14:07.054 "bdev_io_pool_size": 65535, 00:14:07.054 "bdev_io_cache_size": 256, 00:14:07.054 "bdev_auto_examine": true, 00:14:07.054 "iobuf_small_cache_size": 128, 00:14:07.054 "iobuf_large_cache_size": 16 00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "bdev_raid_set_options", 00:14:07.054 "params": { 00:14:07.054 "process_window_size_kb": 1024 00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "bdev_iscsi_set_options", 00:14:07.054 "params": { 00:14:07.054 "timeout_sec": 30 00:14:07.054 } 00:14:07.054 }, 00:14:07.054 { 00:14:07.054 "method": "bdev_nvme_set_options", 00:14:07.054 "params": { 00:14:07.054 "action_on_timeout": "none", 00:14:07.054 "timeout_us": 0, 00:14:07.054 "timeout_admin_us": 0, 00:14:07.054 "keep_alive_timeout_ms": 10000, 00:14:07.054 "arbitration_burst": 0, 00:14:07.054 "low_priority_weight": 0, 00:14:07.054 "medium_priority_weight": 0, 00:14:07.054 "high_priority_weight": 0, 00:14:07.054 "nvme_adminq_poll_period_us": 10000, 00:14:07.054 "nvme_ioq_poll_period_us": 0, 00:14:07.054 "io_queue_requests": 0, 00:14:07.055 "delay_cmd_submit": true, 00:14:07.055 "transport_retry_count": 4, 00:14:07.055 "bdev_retry_count": 3, 
00:14:07.055 "transport_ack_timeout": 0, 00:14:07.055 "ctrlr_loss_timeout_sec": 0, 00:14:07.055 "reconnect_delay_sec": 0, 00:14:07.055 "fast_io_fail_timeout_sec": 0, 00:14:07.055 "disable_auto_failback": false, 00:14:07.055 "generate_uuids": false, 00:14:07.055 "transport_tos": 0, 00:14:07.055 "nvme_error_stat": false, 00:14:07.055 "rdma_srq_size": 0, 00:14:07.055 "io_path_stat": false, 00:14:07.055 "allow_accel_sequence": false, 00:14:07.055 "rdma_max_cq_size": 0, 00:14:07.055 "rdma_cm_event_timeout_ms": 0, 00:14:07.055 "dhchap_digests": [ 00:14:07.055 "sha256", 00:14:07.055 "sha384", 00:14:07.055 "sha512" 00:14:07.055 ], 00:14:07.055 "dhchap_dhgroups": [ 00:14:07.055 "null", 00:14:07.055 "ffdhe2048", 00:14:07.055 "ffdhe3072", 00:14:07.055 "ffdhe4096", 00:14:07.055 "ffdhe6144", 00:14:07.055 "ffdhe8192" 00:14:07.055 ] 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "bdev_nvme_set_hotplug", 00:14:07.055 "params": { 00:14:07.055 "period_us": 100000, 00:14:07.055 "enable": false 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "bdev_malloc_create", 00:14:07.055 "params": { 00:14:07.055 "name": "malloc0", 00:14:07.055 "num_blocks": 8192, 00:14:07.055 "block_size": 4096, 00:14:07.055 "physical_block_size": 4096, 00:14:07.055 "uuid": "376ae7d3-796b-4de1-b335-b2815bb3777c", 00:14:07.055 "optimal_io_boundary": 0 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "bdev_wait_for_examine" 00:14:07.055 } 00:14:07.055 ] 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "subsystem": "nbd", 00:14:07.055 "config": [] 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "subsystem": "scheduler", 00:14:07.055 "config": [ 00:14:07.055 { 00:14:07.055 "method": "framework_set_scheduler", 00:14:07.055 "params": { 00:14:07.055 "name": "static" 00:14:07.055 } 00:14:07.055 } 00:14:07.055 ] 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "subsystem": "nvmf", 00:14:07.055 "config": [ 00:14:07.055 { 00:14:07.055 "method": "nvmf_set_config", 00:14:07.055 "params": { 00:14:07.055 "discovery_filter": "match_any", 00:14:07.055 "admin_cmd_passthru": { 00:14:07.055 "identify_ctrlr": false 00:14:07.055 } 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_set_max_subsystems", 00:14:07.055 "params": { 00:14:07.055 "max_subsystems": 1024 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_set_crdt", 00:14:07.055 "params": { 00:14:07.055 "crdt1": 0, 00:14:07.055 "crdt2": 0, 00:14:07.055 "crdt3": 0 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_create_transport", 00:14:07.055 "params": { 00:14:07.055 "trtype": "TCP", 00:14:07.055 "max_queue_depth": 128, 00:14:07.055 "max_io_qpairs_per_ctrlr": 127, 00:14:07.055 "in_capsule_data_size": 4096, 00:14:07.055 "max_io_size": 131072, 00:14:07.055 "io_unit_size": 131072, 00:14:07.055 "max_aq_depth": 128, 00:14:07.055 "num_shared_buffers": 511, 00:14:07.055 "buf_cache_size": 4294967295, 00:14:07.055 "dif_insert_or_strip": false, 00:14:07.055 "zcopy": false, 00:14:07.055 "c2h_success": false, 00:14:07.055 "sock_priority": 0, 00:14:07.055 "abort_timeout_sec": 1, 00:14:07.055 "ack_timeout": 0, 00:14:07.055 "data_wr_pool_size": 0 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_create_subsystem", 00:14:07.055 "params": { 00:14:07.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.055 "allow_any_host": false, 00:14:07.055 "serial_number": "00000000000000000000", 00:14:07.055 "model_number": "SPDK bdev Controller", 00:14:07.055 "max_namespaces": 32, 
00:14:07.055 "min_cntlid": 1, 00:14:07.055 "max_cntlid": 65519, 00:14:07.055 "ana_reporting": false 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_subsystem_add_host", 00:14:07.055 "params": { 00:14:07.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.055 "host": "nqn.2016-06.io.spdk:host1", 00:14:07.055 "psk": "key0" 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_subsystem_add_ns", 00:14:07.055 "params": { 00:14:07.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.055 "namespace": { 00:14:07.055 "nsid": 1, 00:14:07.055 "bdev_name": "malloc0", 00:14:07.055 "nguid": "376AE7D3796B4DE1B335B2815BB3777C", 00:14:07.055 "uuid": "376ae7d3-796b-4de1-b335-b2815bb3777c", 00:14:07.055 "no_auto_visible": false 00:14:07.055 } 00:14:07.055 } 00:14:07.055 }, 00:14:07.055 { 00:14:07.055 "method": "nvmf_subsystem_add_listener", 00:14:07.055 "params": { 00:14:07.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.055 "listen_address": { 00:14:07.055 "trtype": "TCP", 00:14:07.055 "adrfam": "IPv4", 00:14:07.055 "traddr": "10.0.0.2", 00:14:07.055 "trsvcid": "4420" 00:14:07.055 }, 00:14:07.055 "secure_channel": false, 00:14:07.055 "sock_impl": "ssl" 00:14:07.055 } 00:14:07.055 } 00:14:07.055 ] 00:14:07.055 } 00:14:07.055 ] 00:14:07.055 }' 00:14:07.055 22:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:07.319 22:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:07.319 "subsystems": [ 00:14:07.319 { 00:14:07.319 "subsystem": "keyring", 00:14:07.319 "config": [ 00:14:07.319 { 00:14:07.319 "method": "keyring_file_add_key", 00:14:07.319 "params": { 00:14:07.319 "name": "key0", 00:14:07.319 "path": "/tmp/tmp.eXtlaEkaqG" 00:14:07.319 } 00:14:07.319 } 00:14:07.319 ] 00:14:07.319 }, 00:14:07.319 { 00:14:07.319 "subsystem": "iobuf", 00:14:07.319 "config": [ 00:14:07.319 { 00:14:07.319 "method": "iobuf_set_options", 00:14:07.319 "params": { 00:14:07.319 "small_pool_count": 8192, 00:14:07.319 "large_pool_count": 1024, 00:14:07.319 "small_bufsize": 8192, 00:14:07.319 "large_bufsize": 135168 00:14:07.319 } 00:14:07.319 } 00:14:07.319 ] 00:14:07.319 }, 00:14:07.319 { 00:14:07.319 "subsystem": "sock", 00:14:07.319 "config": [ 00:14:07.319 { 00:14:07.319 "method": "sock_set_default_impl", 00:14:07.319 "params": { 00:14:07.319 "impl_name": "uring" 00:14:07.319 } 00:14:07.319 }, 00:14:07.319 { 00:14:07.319 "method": "sock_impl_set_options", 00:14:07.319 "params": { 00:14:07.319 "impl_name": "ssl", 00:14:07.319 "recv_buf_size": 4096, 00:14:07.319 "send_buf_size": 4096, 00:14:07.319 "enable_recv_pipe": true, 00:14:07.319 "enable_quickack": false, 00:14:07.319 "enable_placement_id": 0, 00:14:07.319 "enable_zerocopy_send_server": true, 00:14:07.319 "enable_zerocopy_send_client": false, 00:14:07.319 "zerocopy_threshold": 0, 00:14:07.319 "tls_version": 0, 00:14:07.319 "enable_ktls": false 00:14:07.319 } 00:14:07.319 }, 00:14:07.319 { 00:14:07.319 "method": "sock_impl_set_options", 00:14:07.319 "params": { 00:14:07.319 "impl_name": "posix", 00:14:07.319 "recv_buf_size": 2097152, 00:14:07.319 "send_buf_size": 2097152, 00:14:07.319 "enable_recv_pipe": true, 00:14:07.319 "enable_quickack": false, 00:14:07.319 "enable_placement_id": 0, 00:14:07.319 "enable_zerocopy_send_server": true, 00:14:07.319 "enable_zerocopy_send_client": false, 00:14:07.319 "zerocopy_threshold": 0, 00:14:07.319 "tls_version": 0, 00:14:07.319 "enable_ktls": false 00:14:07.319 } 00:14:07.319 }, 00:14:07.319 { 
00:14:07.319 "method": "sock_impl_set_options", 00:14:07.319 "params": { 00:14:07.319 "impl_name": "uring", 00:14:07.319 "recv_buf_size": 2097152, 00:14:07.319 "send_buf_size": 2097152, 00:14:07.319 "enable_recv_pipe": true, 00:14:07.319 "enable_quickack": false, 00:14:07.319 "enable_placement_id": 0, 00:14:07.320 "enable_zerocopy_send_server": false, 00:14:07.320 "enable_zerocopy_send_client": false, 00:14:07.320 "zerocopy_threshold": 0, 00:14:07.320 "tls_version": 0, 00:14:07.320 "enable_ktls": false 00:14:07.320 } 00:14:07.320 } 00:14:07.320 ] 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "subsystem": "vmd", 00:14:07.320 "config": [] 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "subsystem": "accel", 00:14:07.320 "config": [ 00:14:07.320 { 00:14:07.320 "method": "accel_set_options", 00:14:07.320 "params": { 00:14:07.320 "small_cache_size": 128, 00:14:07.320 "large_cache_size": 16, 00:14:07.320 "task_count": 2048, 00:14:07.320 "sequence_count": 2048, 00:14:07.320 "buf_count": 2048 00:14:07.320 } 00:14:07.320 } 00:14:07.320 ] 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "subsystem": "bdev", 00:14:07.320 "config": [ 00:14:07.320 { 00:14:07.320 "method": "bdev_set_options", 00:14:07.320 "params": { 00:14:07.320 "bdev_io_pool_size": 65535, 00:14:07.320 "bdev_io_cache_size": 256, 00:14:07.320 "bdev_auto_examine": true, 00:14:07.320 "iobuf_small_cache_size": 128, 00:14:07.320 "iobuf_large_cache_size": 16 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_raid_set_options", 00:14:07.320 "params": { 00:14:07.320 "process_window_size_kb": 1024 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_iscsi_set_options", 00:14:07.320 "params": { 00:14:07.320 "timeout_sec": 30 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_nvme_set_options", 00:14:07.320 "params": { 00:14:07.320 "action_on_timeout": "none", 00:14:07.320 "timeout_us": 0, 00:14:07.320 "timeout_admin_us": 0, 00:14:07.320 "keep_alive_timeout_ms": 10000, 00:14:07.320 "arbitration_burst": 0, 00:14:07.320 "low_priority_weight": 0, 00:14:07.320 "medium_priority_weight": 0, 00:14:07.320 "high_priority_weight": 0, 00:14:07.320 "nvme_adminq_poll_period_us": 10000, 00:14:07.320 "nvme_ioq_poll_period_us": 0, 00:14:07.320 "io_queue_requests": 512, 00:14:07.320 "delay_cmd_submit": true, 00:14:07.320 "transport_retry_count": 4, 00:14:07.320 "bdev_retry_count": 3, 00:14:07.320 "transport_ack_timeout": 0, 00:14:07.320 "ctrlr_loss_timeout_sec": 0, 00:14:07.320 "reconnect_delay_sec": 0, 00:14:07.320 "fast_io_fail_timeout_sec": 0, 00:14:07.320 "disable_auto_failback": false, 00:14:07.320 "generate_uuids": false, 00:14:07.320 "transport_tos": 0, 00:14:07.320 "nvme_error_stat": false, 00:14:07.320 "rdma_srq_size": 0, 00:14:07.320 "io_path_stat": false, 00:14:07.320 "allow_accel_sequence": false, 00:14:07.320 "rdma_max_cq_size": 0, 00:14:07.320 "rdma_cm_event_timeout_ms": 0, 00:14:07.320 "dhchap_digests": [ 00:14:07.320 "sha256", 00:14:07.320 "sha384", 00:14:07.320 "sha512" 00:14:07.320 ], 00:14:07.320 "dhchap_dhgroups": [ 00:14:07.320 "null", 00:14:07.320 "ffdhe2048", 00:14:07.320 "ffdhe3072", 00:14:07.320 "ffdhe4096", 00:14:07.320 "ffdhe6144", 00:14:07.320 "ffdhe8192" 00:14:07.320 ] 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_nvme_attach_controller", 00:14:07.320 "params": { 00:14:07.320 "name": "nvme0", 00:14:07.320 "trtype": "TCP", 00:14:07.320 "adrfam": "IPv4", 00:14:07.320 "traddr": "10.0.0.2", 00:14:07.320 "trsvcid": "4420", 00:14:07.320 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:07.320 "prchk_reftag": false, 00:14:07.320 "prchk_guard": false, 00:14:07.320 "ctrlr_loss_timeout_sec": 0, 00:14:07.320 "reconnect_delay_sec": 0, 00:14:07.320 "fast_io_fail_timeout_sec": 0, 00:14:07.320 "psk": "key0", 00:14:07.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.320 "hdgst": false, 00:14:07.320 "ddgst": false 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_nvme_set_hotplug", 00:14:07.320 "params": { 00:14:07.320 "period_us": 100000, 00:14:07.320 "enable": false 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_enable_histogram", 00:14:07.320 "params": { 00:14:07.320 "name": "nvme0n1", 00:14:07.320 "enable": true 00:14:07.320 } 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "method": "bdev_wait_for_examine" 00:14:07.320 } 00:14:07.320 ] 00:14:07.320 }, 00:14:07.320 { 00:14:07.320 "subsystem": "nbd", 00:14:07.320 "config": [] 00:14:07.320 } 00:14:07.320 ] 00:14:07.320 }' 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 73919 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73919 ']' 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73919 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73919 00:14:07.320 killing process with pid 73919 00:14:07.320 Received shutdown signal, test time was about 1.000000 seconds 00:14:07.320 00:14:07.320 Latency(us) 00:14:07.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.320 =================================================================================================================== 00:14:07.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73919' 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73919 00:14:07.320 22:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73919 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 73891 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73891 ']' 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73891 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73891 00:14:07.607 killing process with pid 73891 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73891' 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73891 00:14:07.607 22:25:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 73891 00:14:07.866 22:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:07.866 22:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.866 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.866 22:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:07.866 "subsystems": [ 00:14:07.866 { 00:14:07.866 "subsystem": "keyring", 00:14:07.866 "config": [ 00:14:07.866 { 00:14:07.866 "method": "keyring_file_add_key", 00:14:07.866 "params": { 00:14:07.866 "name": "key0", 00:14:07.866 "path": "/tmp/tmp.eXtlaEkaqG" 00:14:07.866 } 00:14:07.866 } 00:14:07.866 ] 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "subsystem": "iobuf", 00:14:07.866 "config": [ 00:14:07.866 { 00:14:07.866 "method": "iobuf_set_options", 00:14:07.866 "params": { 00:14:07.866 "small_pool_count": 8192, 00:14:07.866 "large_pool_count": 1024, 00:14:07.866 "small_bufsize": 8192, 00:14:07.866 "large_bufsize": 135168 00:14:07.866 } 00:14:07.866 } 00:14:07.866 ] 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "subsystem": "sock", 00:14:07.866 "config": [ 00:14:07.866 { 00:14:07.866 "method": "sock_set_default_impl", 00:14:07.866 "params": { 00:14:07.866 "impl_name": "uring" 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "sock_impl_set_options", 00:14:07.866 "params": { 00:14:07.866 "impl_name": "ssl", 00:14:07.866 "recv_buf_size": 4096, 00:14:07.866 "send_buf_size": 4096, 00:14:07.866 "enable_recv_pipe": true, 00:14:07.866 "enable_quickack": false, 00:14:07.866 "enable_placement_id": 0, 00:14:07.866 "enable_zerocopy_send_server": true, 00:14:07.866 "enable_zerocopy_send_client": false, 00:14:07.866 "zerocopy_threshold": 0, 00:14:07.866 "tls_version": 0, 00:14:07.866 "enable_ktls": false 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "sock_impl_set_options", 00:14:07.866 "params": { 00:14:07.866 "impl_name": "posix", 00:14:07.866 "recv_buf_size": 2097152, 00:14:07.866 "send_buf_size": 2097152, 00:14:07.866 "enable_recv_pipe": true, 00:14:07.866 "enable_quickack": false, 00:14:07.866 "enable_placement_id": 0, 00:14:07.866 "enable_zerocopy_send_server": true, 00:14:07.866 "enable_zerocopy_send_client": false, 00:14:07.866 "zerocopy_threshold": 0, 00:14:07.866 "tls_version": 0, 00:14:07.866 "enable_ktls": false 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "sock_impl_set_options", 00:14:07.866 "params": { 00:14:07.866 "impl_name": "uring", 00:14:07.866 "recv_buf_size": 2097152, 00:14:07.866 "send_buf_size": 2097152, 00:14:07.866 "enable_recv_pipe": true, 00:14:07.866 "enable_quickack": false, 00:14:07.866 "enable_placement_id": 0, 00:14:07.866 "enable_zerocopy_send_server": false, 00:14:07.866 "enable_zerocopy_send_client": false, 00:14:07.866 "zerocopy_threshold": 0, 00:14:07.866 "tls_version": 0, 00:14:07.866 "enable_ktls": false 00:14:07.866 } 00:14:07.866 } 00:14:07.866 ] 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "subsystem": "vmd", 00:14:07.866 "config": [] 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "subsystem": "accel", 00:14:07.866 "config": [ 00:14:07.866 { 00:14:07.866 "method": "accel_set_options", 00:14:07.866 "params": { 00:14:07.866 "small_cache_size": 128, 00:14:07.866 "large_cache_size": 16, 00:14:07.866 "task_count": 2048, 00:14:07.866 "sequence_count": 2048, 00:14:07.866 "buf_count": 2048 00:14:07.866 } 00:14:07.866 } 00:14:07.866 ] 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "subsystem": "bdev", 00:14:07.866 
"config": [ 00:14:07.866 { 00:14:07.866 "method": "bdev_set_options", 00:14:07.866 "params": { 00:14:07.866 "bdev_io_pool_size": 65535, 00:14:07.866 "bdev_io_cache_size": 256, 00:14:07.866 "bdev_auto_examine": true, 00:14:07.866 "iobuf_small_cache_size": 128, 00:14:07.866 "iobuf_large_cache_size": 16 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "bdev_raid_set_options", 00:14:07.866 "params": { 00:14:07.866 "process_window_size_kb": 1024 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "bdev_iscsi_set_options", 00:14:07.866 "params": { 00:14:07.866 "timeout_sec": 30 00:14:07.866 } 00:14:07.866 }, 00:14:07.866 { 00:14:07.866 "method": "bdev_nvme_set_options", 00:14:07.866 "params": { 00:14:07.866 "action_on_timeout": "none", 00:14:07.866 "timeout_us": 0, 00:14:07.866 "timeout_admin_us": 0, 00:14:07.866 "keep_alive_timeout_ms": 10000, 00:14:07.866 "arbitration_burst": 0, 00:14:07.867 "low_priority_weight": 0, 00:14:07.867 "medium_priority_weight": 0, 00:14:07.867 "high_priority_weight": 0, 00:14:07.867 "nvme_adminq_poll_period_us": 10000, 00:14:07.867 "nvme_ioq_poll_period_us": 0, 00:14:07.867 "io_queue_requests": 0, 00:14:07.867 "delay_cmd_submit": true, 00:14:07.867 "transport_retry_count": 4, 00:14:07.867 "bdev_retry_count": 3, 00:14:07.867 "transport_ack_timeout": 0, 00:14:07.867 "ctrlr_loss_timeout_sec": 0, 00:14:07.867 "reconnect_delay_sec": 0, 00:14:07.867 "fast_io_fail_timeout_sec": 0, 00:14:07.867 "disable_auto_failback": false, 00:14:07.867 "generate_uuids": false, 00:14:07.867 "transport_tos": 0, 00:14:07.867 "nvme_error_stat": false, 00:14:07.867 "rdma_srq_size": 0, 00:14:07.867 "io_path_stat": false, 00:14:07.867 "allow_accel_sequence": false, 00:14:07.867 "rdma_max_cq_size": 0, 00:14:07.867 "rdma_cm_event_timeout_ms": 0, 00:14:07.867 "dhchap_digests": [ 00:14:07.867 "sha256", 00:14:07.867 "sha384", 00:14:07.867 "sha512" 00:14:07.867 ], 00:14:07.867 "dhchap_dhgroups": [ 00:14:07.867 "null", 00:14:07.867 "ffdhe2048", 00:14:07.867 "ffdhe3072", 00:14:07.867 "ffdhe4096", 00:14:07.867 "ffdhe6144", 00:14:07.867 "ffdhe8192" 00:14:07.867 ] 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "bdev_nvme_set_hotplug", 00:14:07.867 "params": { 00:14:07.867 "period_us": 100000, 00:14:07.867 "enable": false 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "bdev_malloc_create", 00:14:07.867 "params": { 00:14:07.867 "name": "malloc0", 00:14:07.867 "num_blocks": 8192, 00:14:07.867 "block_size": 4096, 00:14:07.867 "physical_block_size": 4096, 00:14:07.867 "uuid": "376ae7d3-796b-4de1-b335-b2815bb3777c", 00:14:07.867 "optimal_io_boundary": 0 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "bdev_wait_for_examine" 00:14:07.867 } 00:14:07.867 ] 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "subsystem": "nbd", 00:14:07.867 "config": [] 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "subsystem": "scheduler", 00:14:07.867 "config": [ 00:14:07.867 { 00:14:07.867 "method": "framework_set_scheduler", 00:14:07.867 "params": { 00:14:07.867 "name": "static" 00:14:07.867 } 00:14:07.867 } 00:14:07.867 ] 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "subsystem": "nvmf", 00:14:07.867 "config": [ 00:14:07.867 { 00:14:07.867 "method": "nvmf_set_config", 00:14:07.867 "params": { 00:14:07.867 "discovery_filter": "match_any", 00:14:07.867 "admin_cmd_passthru": { 00:14:07.867 "identify_ctrlr": false 00:14:07.867 } 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_set_max_subsystems", 00:14:07.867 
"params": { 00:14:07.867 "max_subsystems": 1024 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_set_crdt", 00:14:07.867 "params": { 00:14:07.867 "crdt1": 0, 00:14:07.867 "crdt2": 0, 00:14:07.867 "crdt3": 0 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_create_transport", 00:14:07.867 "params": { 00:14:07.867 "trtype": "TCP", 00:14:07.867 "max_queue_depth": 128, 00:14:07.867 "max_io_qpairs_per_ctrlr": 127, 00:14:07.867 "in_capsule_data_size": 4096, 00:14:07.867 "max_io_size": 131072, 00:14:07.867 "io_unit_size": 131072, 00:14:07.867 "max_aq_depth": 128, 00:14:07.867 "num_shared_buffers": 511, 00:14:07.867 "buf_cache_size": 4294967295, 00:14:07.867 "dif_insert_or_strip": false, 00:14:07.867 "zcopy": false, 00:14:07.867 "c2h_success": false, 00:14:07.867 "sock_priority": 0, 00:14:07.867 "abort_timeout_sec": 1, 00:14:07.867 "ack_timeout": 0, 00:14:07.867 "data_wr_pool_size": 0 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_create_subsystem", 00:14:07.867 "params": { 00:14:07.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.867 "allow_any_host": false, 00:14:07.867 "serial_number": "00000000000000000000", 00:14:07.867 "model_number": "SPDK bdev Controller", 00:14:07.867 "max_namespaces": 32, 00:14:07.867 "min_cntlid": 1, 00:14:07.867 "max_cntlid": 65519, 00:14:07.867 "ana_reporting": false 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_subsystem_add_host", 00:14:07.867 "params": { 00:14:07.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.867 "host": "nqn.2016-06.io.spdk:host1", 00:14:07.867 "psk": "key0" 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_subsystem_add_ns", 00:14:07.867 "params": { 00:14:07.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.867 "namespace": { 00:14:07.867 "nsid": 1, 00:14:07.867 "bdev_name": "malloc0", 00:14:07.867 "nguid": "376AE7D3796B4DE1B335B2815BB3777C", 00:14:07.867 "uuid": "376ae7d3-796b-4de1-b335-b2815bb3777c", 00:14:07.867 "no_auto_visible": false 00:14:07.867 } 00:14:07.867 } 00:14:07.867 }, 00:14:07.867 { 00:14:07.867 "method": "nvmf_subsystem_add_listener", 00:14:07.867 "params": { 00:14:07.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.867 "listen_address": { 00:14:07.867 "trtype": "TCP", 00:14:07.867 "adrfam": "IPv4", 00:14:07.867 "traddr": "10.0.0.2", 00:14:07.867 "trsvcid": "4420" 00:14:07.867 }, 00:14:07.867 "secure_channel": false, 00:14:07.867 "sock_impl": "ssl" 00:14:07.867 } 00:14:07.867 } 00:14:07.867 ] 00:14:07.867 } 00:14:07.867 ] 00:14:07.867 }' 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73974 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73974 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73974 ']' 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.867 22:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.867 [2024-07-15 22:25:21.361340] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:07.867 [2024-07-15 22:25:21.361405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.126 [2024-07-15 22:25:21.505226] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.126 [2024-07-15 22:25:21.592488] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.126 [2024-07-15 22:25:21.592534] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.126 [2024-07-15 22:25:21.592543] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.126 [2024-07-15 22:25:21.592551] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.126 [2024-07-15 22:25:21.592558] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.126 [2024-07-15 22:25:21.592645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.126 [2024-07-15 22:25:21.746784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.384 [2024-07-15 22:25:21.814522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.384 [2024-07-15 22:25:21.846434] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:08.384 [2024-07-15 22:25:21.846592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74006 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74006 /var/tmp/bdevperf.sock 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74006 ']' 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.643 22:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:08.643 "subsystems": [ 00:14:08.643 { 00:14:08.643 "subsystem": "keyring", 00:14:08.643 "config": [ 00:14:08.643 { 00:14:08.643 "method": "keyring_file_add_key", 00:14:08.643 "params": { 00:14:08.643 "name": "key0", 00:14:08.643 "path": "/tmp/tmp.eXtlaEkaqG" 00:14:08.643 } 00:14:08.643 } 00:14:08.643 ] 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "subsystem": "iobuf", 00:14:08.643 "config": [ 00:14:08.643 { 00:14:08.643 "method": "iobuf_set_options", 00:14:08.643 "params": { 00:14:08.643 "small_pool_count": 8192, 00:14:08.643 "large_pool_count": 1024, 00:14:08.643 "small_bufsize": 8192, 00:14:08.643 "large_bufsize": 135168 00:14:08.643 } 00:14:08.643 } 00:14:08.643 ] 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "subsystem": "sock", 00:14:08.643 "config": [ 00:14:08.643 { 00:14:08.643 "method": "sock_set_default_impl", 00:14:08.643 "params": { 00:14:08.643 "impl_name": "uring" 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "sock_impl_set_options", 00:14:08.643 "params": { 00:14:08.643 "impl_name": "ssl", 00:14:08.643 "recv_buf_size": 4096, 00:14:08.643 "send_buf_size": 4096, 00:14:08.643 "enable_recv_pipe": true, 00:14:08.643 "enable_quickack": false, 00:14:08.643 "enable_placement_id": 0, 00:14:08.643 "enable_zerocopy_send_server": true, 00:14:08.643 "enable_zerocopy_send_client": false, 00:14:08.643 "zerocopy_threshold": 0, 00:14:08.643 "tls_version": 0, 00:14:08.643 "enable_ktls": false 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "sock_impl_set_options", 00:14:08.643 "params": { 00:14:08.643 "impl_name": "posix", 00:14:08.643 "recv_buf_size": 2097152, 00:14:08.643 "send_buf_size": 2097152, 00:14:08.643 "enable_recv_pipe": true, 00:14:08.643 "enable_quickack": false, 00:14:08.643 "enable_placement_id": 0, 00:14:08.643 "enable_zerocopy_send_server": true, 00:14:08.643 "enable_zerocopy_send_client": false, 00:14:08.643 "zerocopy_threshold": 0, 00:14:08.643 "tls_version": 0, 00:14:08.643 "enable_ktls": false 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "sock_impl_set_options", 00:14:08.643 "params": { 00:14:08.643 "impl_name": "uring", 00:14:08.643 "recv_buf_size": 2097152, 00:14:08.643 "send_buf_size": 2097152, 00:14:08.643 "enable_recv_pipe": true, 00:14:08.643 "enable_quickack": false, 00:14:08.643 "enable_placement_id": 0, 00:14:08.643 "enable_zerocopy_send_server": false, 00:14:08.643 "enable_zerocopy_send_client": false, 00:14:08.643 "zerocopy_threshold": 0, 00:14:08.643 "tls_version": 0, 00:14:08.643 "enable_ktls": false 00:14:08.643 } 00:14:08.643 } 00:14:08.643 ] 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "subsystem": "vmd", 00:14:08.643 "config": [] 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "subsystem": "accel", 00:14:08.643 "config": [ 00:14:08.643 { 00:14:08.643 "method": "accel_set_options", 00:14:08.643 "params": { 00:14:08.643 "small_cache_size": 128, 00:14:08.643 "large_cache_size": 16, 00:14:08.643 "task_count": 2048, 00:14:08.643 "sequence_count": 2048, 00:14:08.643 "buf_count": 2048 00:14:08.643 } 00:14:08.643 } 00:14:08.643 ] 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "subsystem": "bdev", 00:14:08.643 "config": [ 00:14:08.643 { 00:14:08.643 "method": "bdev_set_options", 00:14:08.643 "params": { 00:14:08.643 "bdev_io_pool_size": 65535, 00:14:08.643 "bdev_io_cache_size": 256, 00:14:08.643 "bdev_auto_examine": true, 00:14:08.643 "iobuf_small_cache_size": 128, 
00:14:08.643 "iobuf_large_cache_size": 16 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "bdev_raid_set_options", 00:14:08.643 "params": { 00:14:08.643 "process_window_size_kb": 1024 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "bdev_iscsi_set_options", 00:14:08.643 "params": { 00:14:08.643 "timeout_sec": 30 00:14:08.643 } 00:14:08.643 }, 00:14:08.643 { 00:14:08.643 "method": "bdev_nvme_set_options", 00:14:08.643 "params": { 00:14:08.643 "action_on_timeout": "none", 00:14:08.643 "timeout_us": 0, 00:14:08.643 "timeout_admin_us": 0, 00:14:08.643 "keep_alive_timeout_ms": 10000, 00:14:08.643 "arbitration_burst": 0, 00:14:08.643 "low_priority_weight": 0, 00:14:08.643 "medium_priority_weight": 0, 00:14:08.643 "high_priority_weight": 0, 00:14:08.643 "nvme_adminq_poll_period_us": 10000, 00:14:08.643 "nvme_ioq_poll_period_us": 0, 00:14:08.643 "io_queue_requests": 512, 00:14:08.643 "delay_cmd_submit": true, 00:14:08.643 "transport_retry_count": 4, 00:14:08.643 "bdev_retry_count": 3, 00:14:08.643 "transport_ack_timeout": 0, 00:14:08.643 "ctrlr_loss_timeout_sec": 0, 00:14:08.643 "reconnect_delay_sec": 0, 00:14:08.643 "fast_io_fail_timeout_sec": 0, 00:14:08.643 "disable_auto_failback": false, 00:14:08.643 "generate_uuids": false, 00:14:08.643 "transport_tos": 0, 00:14:08.643 "nvme_error_stat": false, 00:14:08.643 "rdma_srq_size": 0, 00:14:08.643 "io_path_stat": false, 00:14:08.643 "allow_accel_sequence": false, 00:14:08.643 "rdma_max_cq_size": 0, 00:14:08.643 "rdma_cm_event_timeout_ms": 0, 00:14:08.643 "dhchap_digests": [ 00:14:08.643 "sha256", 00:14:08.643 "sha384", 00:14:08.643 "sha512" 00:14:08.643 ], 00:14:08.643 "dhchap_dhgroups": [ 00:14:08.643 "null", 00:14:08.643 "ffdhe2048", 00:14:08.643 "ffdhe3072", 00:14:08.643 "ffdhe4096", 00:14:08.643 "ffdhe6144", 00:14:08.644 "ffdhe8192" 00:14:08.644 ] 00:14:08.644 } 00:14:08.644 }, 00:14:08.644 { 00:14:08.644 "method": "bdev_nvme_attach_controller", 00:14:08.644 "params": { 00:14:08.644 "name": "nvme0", 00:14:08.644 "trtype": "TCP", 00:14:08.644 "adrfam": "IPv4", 00:14:08.644 "traddr": "10.0.0.2", 00:14:08.644 "trsvcid": "4420", 00:14:08.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.644 "prchk_reftag": false, 00:14:08.644 "prchk_guard": false, 00:14:08.644 "ctrlr_loss_timeout_sec": 0, 00:14:08.644 "reconnect_delay_sec": 0, 00:14:08.644 "fast_io_fail_timeout_sec": 0, 00:14:08.644 "psk": "key0", 00:14:08.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.644 "hdgst": false, 00:14:08.644 "ddgst": false 00:14:08.644 } 00:14:08.644 }, 00:14:08.644 { 00:14:08.644 "method": "bdev_nvme_set_hotplug", 00:14:08.644 "params": { 00:14:08.644 "period_us": 100000, 00:14:08.644 "enable": false 00:14:08.644 } 00:14:08.644 }, 00:14:08.644 { 00:14:08.644 "method": "bdev_enable_histogram", 00:14:08.644 "params": { 00:14:08.644 "name": "nvme0n1", 00:14:08.644 "enable": true 00:14:08.644 } 00:14:08.644 }, 00:14:08.644 { 00:14:08.644 "method": "bdev_wait_for_examine" 00:14:08.644 } 00:14:08.644 ] 00:14:08.644 }, 00:14:08.644 { 00:14:08.644 "subsystem": "nbd", 00:14:08.644 "config": [] 00:14:08.644 } 00:14:08.644 ] 00:14:08.644 }' 00:14:08.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.644 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.644 22:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.902 [2024-07-15 22:25:22.304064] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:14:08.902 [2024-07-15 22:25:22.304267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:14:08.902 [2024-07-15 22:25:22.441171] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.902 [2024-07-15 22:25:22.526915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.159 [2024-07-15 22:25:22.649106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.159 [2024-07-15 22:25:22.696174] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.727 22:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.986 Running I/O for 1 seconds... 00:14:10.984 00:14:10.984 Latency(us) 00:14:10.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.984 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:10.984 Verification LBA range: start 0x0 length 0x2000 00:14:10.984 nvme0n1 : 1.01 5874.19 22.95 0.00 0.00 21633.67 4474.35 17792.10 00:14:10.984 =================================================================================================================== 00:14:10.984 Total : 5874.19 22.95 0.00 0.00 21633.67 4474.35 17792.10 00:14:10.984 0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:10.984 nvmf_trace.0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74006 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74006 ']' 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 74006 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74006 00:14:10.984 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.984 killing process with pid 74006 00:14:10.984 Received shutdown signal, test time was about 1.000000 seconds 00:14:10.984 00:14:10.984 Latency(us) 00:14:10.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.984 =================================================================================================================== 00:14:10.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.985 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.985 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74006' 00:14:10.985 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74006 00:14:10.985 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74006 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.242 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.242 rmmod nvme_tcp 00:14:11.242 rmmod nvme_fabrics 00:14:11.242 rmmod nvme_keyring 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73974 ']' 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73974 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73974 ']' 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73974 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73974 00:14:11.500 killing process with pid 73974 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73974' 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73974 00:14:11.500 22:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73974 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.500 22:25:25 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.500 22:25:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.757 22:25:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:11.757 22:25:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.StnTv3Po7b /tmp/tmp.Z0drO0PlUc /tmp/tmp.eXtlaEkaqG 00:14:11.757 00:14:11.757 real 1m20.198s 00:14:11.757 user 2m0.072s 00:14:11.757 sys 0m29.736s 00:14:11.757 ************************************ 00:14:11.757 END TEST nvmf_tls 00:14:11.757 ************************************ 00:14:11.757 22:25:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.757 22:25:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.757 22:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.757 22:25:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:11.757 22:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.757 22:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.757 22:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.757 ************************************ 00:14:11.757 START TEST nvmf_fips 00:14:11.757 ************************************ 00:14:11.757 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:11.757 * Looking for test storage... 
00:14:11.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:11.757 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.757 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.015 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:12.016 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:12.016 Error setting digest 00:14:12.016 00E21259837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:12.016 00E21259837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:12.275 Cannot find device "nvmf_tgt_br" 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.275 Cannot find device "nvmf_tgt_br2" 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:12.275 Cannot find device "nvmf_tgt_br" 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:12.275 Cannot find device "nvmf_tgt_br2" 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.275 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.534 22:25:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:12.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:12.534 00:14:12.534 --- 10.0.0.2 ping statistics --- 00:14:12.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.534 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:12.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:12.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:14:12.534 00:14:12.534 --- 10.0.0.3 ping statistics --- 00:14:12.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.534 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:12.534 00:14:12.534 --- 10.0.0.1 ping statistics --- 00:14:12.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.534 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74280 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74280 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74280 ']' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.534 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:12.534 [2024-07-15 22:25:26.149138] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
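For reference, the veth/bridge topology that nvmf_veth_init just finished building comes down to one initiator interface in the root namespace at 10.0.0.1 and two target interfaces inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined through the nvmf_br bridge. Every underlying command appears in the output above; two of them are folded into loops below purely to keep the sketch short.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT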
00:14:12.534 [2024-07-15 22:25:26.149207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.793 [2024-07-15 22:25:26.291947] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.793 [2024-07-15 22:25:26.376183] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.793 [2024-07-15 22:25:26.376229] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.793 [2024-07-15 22:25:26.376238] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.793 [2024-07-15 22:25:26.376247] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.793 [2024-07-15 22:25:26.376254] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.793 [2024-07-15 22:25:26.376277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.793 [2024-07-15 22:25:26.416860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.360 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.360 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:13.360 22:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.360 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.360 22:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:13.619 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.619 [2024-07-15 22:25:27.198767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.619 [2024-07-15 22:25:27.214685] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.619 [2024-07-15 22:25:27.214968] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.619 [2024-07-15 22:25:27.243405] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:13.619 malloc0 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
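The PSK handling in fips.sh@136-139 above boils down to writing the interchange-format key to a file without a trailing newline and locking down its permissions before setup_nvmf_tgt_conf hands the path to the target. Reproduced as a standalone sketch, with the key value and path copied from this run:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"   # -n keeps a trailing newline out of the PSK file
chmod 0600 "$key_path"         # owner-only access, as the test script sets it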
00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74314 00:14:13.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74314 /var/tmp/bdevperf.sock 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74314 ']' 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.877 22:25:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:13.877 [2024-07-15 22:25:27.344891] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:13.877 [2024-07-15 22:25:27.344951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74314 ] 00:14:13.877 [2024-07-15 22:25:27.487786] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.137 [2024-07-15 22:25:27.579927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.137 [2024-07-15 22:25:27.622110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.744 22:25:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.744 22:25:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:14.744 22:25:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:14.744 [2024-07-15 22:25:28.308999] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.744 [2024-07-15 22:25:28.309100] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.002 TLSTESTn1 00:14:15.002 22:25:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.002 Running I/O for 10 seconds... 
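The rpc.py call issued by fips.sh@150 above is the whole client side of this TLS test: it attaches a TLS-protected NVMe/TCP controller through bdevperf's RPC socket using the PSK file written earlier, and bdevperf.py then drives the 10-second verify workload. Reproduced as standalone commands with the arguments copied from the log (the latency table that follows shows the result):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt   # exposes the namespace as bdev TLSTESTn1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests                     # runs the configured verify workload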
00:14:25.051 00:14:25.051 Latency(us) 00:14:25.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.051 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:25.051 Verification LBA range: start 0x0 length 0x2000 00:14:25.051 TLSTESTn1 : 10.01 5450.96 21.29 0.00 0.00 23448.34 3197.84 19687.12 00:14:25.051 =================================================================================================================== 00:14:25.051 Total : 5450.96 21.29 0.00 0.00 23448.34 3197.84 19687.12 00:14:25.051 0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:25.051 nvmf_trace.0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74314 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74314 ']' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74314 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74314 00:14:25.051 killing process with pid 74314 00:14:25.051 Received shutdown signal, test time was about 10.000000 seconds 00:14:25.051 00:14:25.051 Latency(us) 00:14:25.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.051 =================================================================================================================== 00:14:25.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74314' 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74314 00:14:25.051 [2024-07-15 22:25:38.650723] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:25.051 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74314 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
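The cleanup being traced here is the standard tear-down of these suites: archive the target's tracepoint shared-memory file as a build artifact, then stop bdevperf by PID. Stripped of the error handling that process_shm and killprocess add (the kill -0 liveness probe and the process-name check), it is roughly:

    # Archive the tracepoint shared-memory file for offline analysis
    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')    # -> nvmf_trace.0
    tar -C /dev/shm/ -cvzf \
        /home/vagrant/spdk_repo/spdk/../output/${shm_file}_shm.tar.gz "$shm_file"

    # killprocess 74314: signal bdevperf and wait for it to exit
    kill 74314
    wait 74314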
00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.310 rmmod nvme_tcp 00:14:25.310 rmmod nvme_fabrics 00:14:25.310 rmmod nvme_keyring 00:14:25.310 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74280 ']' 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74280 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74280 ']' 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74280 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74280 00:14:25.569 killing process with pid 74280 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74280' 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74280 00:14:25.569 [2024-07-15 22:25:38.969448] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:25.569 22:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74280 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.569 22:25:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.828 22:25:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:25.828 22:25:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:25.828 ************************************ 00:14:25.828 END TEST nvmf_fips 00:14:25.828 ************************************ 00:14:25.828 00:14:25.828 real 0m13.957s 00:14:25.828 user 0m17.770s 00:14:25.828 sys 0m6.253s 00:14:25.828 22:25:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.828 22:25:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:25.828 22:25:39 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:25.828 22:25:39 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.828 22:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.828 ************************************ 00:14:25.828 START TEST nvmf_identify 00:14:25.828 ************************************ 00:14:25.828 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:26.087 * Looking for test storage... 00:14:26.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.087 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.088 22:25:39 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.088 22:25:39 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:26.088 Cannot find device "nvmf_tgt_br" 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.088 Cannot find device "nvmf_tgt_br2" 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:26.088 22:25:39 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:26.088 Cannot find device "nvmf_tgt_br" 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:26.088 Cannot find device "nvmf_tgt_br2" 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.088 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:26.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:14:26.348 00:14:26.348 --- 10.0.0.2 ping statistics --- 00:14:26.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.348 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:26.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:14:26.348 00:14:26.348 --- 10.0.0.3 ping statistics --- 00:14:26.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.348 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:14:26.348 00:14:26.348 --- 10.0.0.1 ping statistics --- 00:14:26.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.348 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74664 00:14:26.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
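Because NET_TYPE=virt, nvmf_veth_init builds the whole test network out of veth pairs and a bridge rather than touching physical NICs, and the target the script is now waiting for will listen on 10.0.0.2:4420 inside the nvmf_tgt_ns_spdk namespace. Condensed from the commands traced above (interface and namespace names match the NVMF_* variables set earlier), the topology is: the initiator side keeps 10.0.0.1 on nvmf_init_if, the namespace owns 10.0.0.2 and 10.0.0.3, and the bridge-side peers are all enslaved to nvmf_br, with iptables rules admitting the NVMe/TCP port and bridge-local forwarding:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # ...bring each interface (and lo inside the namespace) up, as traced above...

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # sanity checks; 10.0.0.3 and 10.0.0.1 are pinged the same way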
00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74664 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74664 ']' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.348 22:25:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.608 [2024-07-15 22:25:39.980791] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:26.608 [2024-07-15 22:25:39.980859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.608 [2024-07-15 22:25:40.125917] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.608 [2024-07-15 22:25:40.219064] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.608 [2024-07-15 22:25:40.219265] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.608 [2024-07-15 22:25:40.219425] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.608 [2024-07-15 22:25:40.219470] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.608 [2024-07-15 22:25:40.219495] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
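The target was launched with every tracepoint group enabled (-e 0xFFFF, reported above as "Tracepoint Group Mask 0xFFFF specified"), which is what makes the nvmf_trace.0 shared-memory file worth keeping. While the app (shm id 0) is alive, the notices can be followed literally; the spdk_trace path below assumes the binary built under build/bin in this tree, and /tmp is just an arbitrary destination for the copy:

    # Snapshot the tracepoint ring buffers of the running app (app name nvmf, shm id 0)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0

    # Or keep the raw buffer for offline analysis after the app exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0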
00:14:26.608 [2024-07-15 22:25:40.219725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.608 [2024-07-15 22:25:40.219922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.608 [2024-07-15 22:25:40.220085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.608 [2024-07-15 22:25:40.220310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.892 [2024-07-15 22:25:40.262562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 [2024-07-15 22:25:40.877748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 Malloc0 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 [2024-07-15 22:25:41.001006] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 [ 00:14:27.460 { 00:14:27.460 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.460 "subtype": "Discovery", 00:14:27.460 "listen_addresses": [ 00:14:27.460 { 00:14:27.460 "trtype": "TCP", 00:14:27.460 "adrfam": "IPv4", 00:14:27.460 "traddr": "10.0.0.2", 00:14:27.460 "trsvcid": "4420" 00:14:27.460 } 00:14:27.460 ], 00:14:27.460 "allow_any_host": true, 00:14:27.460 "hosts": [] 00:14:27.460 }, 00:14:27.460 { 00:14:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.460 "subtype": "NVMe", 00:14:27.460 "listen_addresses": [ 00:14:27.460 { 00:14:27.460 "trtype": "TCP", 00:14:27.460 "adrfam": "IPv4", 00:14:27.460 "traddr": "10.0.0.2", 00:14:27.460 "trsvcid": "4420" 00:14:27.460 } 00:14:27.460 ], 00:14:27.460 "allow_any_host": true, 00:14:27.460 "hosts": [], 00:14:27.460 "serial_number": "SPDK00000000000001", 00:14:27.460 "model_number": "SPDK bdev Controller", 00:14:27.460 "max_namespaces": 32, 00:14:27.460 "min_cntlid": 1, 00:14:27.460 "max_cntlid": 65519, 00:14:27.460 "namespaces": [ 00:14:27.460 { 00:14:27.460 "nsid": 1, 00:14:27.460 "bdev_name": "Malloc0", 00:14:27.460 "name": "Malloc0", 00:14:27.460 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:27.460 "eui64": "ABCDEF0123456789", 00:14:27.460 "uuid": "8a82c517-4c9b-4e8f-aff5-726171260917" 00:14:27.460 } 00:14:27.460 ] 00:14:27.460 } 00:14:27.460 ] 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.460 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:27.460 [2024-07-15 22:25:41.072876] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
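rpc_cmd in this trace is effectively a wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, so the subsystem that spdk_nvme_identify is about to enumerate was provisioned with the equivalent of the following (arguments copied from the rpc_cmd lines above; $rpc is just shorthand introduced here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # uses /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems   # returns the JSON dump shown above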
00:14:27.460 [2024-07-15 22:25:41.072921] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74699 ] 00:14:27.722 [2024-07-15 22:25:41.209881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:27.722 [2024-07-15 22:25:41.209942] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:27.722 [2024-07-15 22:25:41.209947] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:27.722 [2024-07-15 22:25:41.209957] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:27.722 [2024-07-15 22:25:41.209963] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:27.722 [2024-07-15 22:25:41.210195] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:27.722 [2024-07-15 22:25:41.210235] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2462510 0 00:14:27.722 [2024-07-15 22:25:41.224615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:27.722 [2024-07-15 22:25:41.224635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:27.722 [2024-07-15 22:25:41.224640] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:27.722 [2024-07-15 22:25:41.224644] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:27.722 [2024-07-15 22:25:41.224682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.224687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.224691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.722 [2024-07-15 22:25:41.224717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:27.722 [2024-07-15 22:25:41.224745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.722 [2024-07-15 22:25:41.232612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.722 [2024-07-15 22:25:41.232627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.722 [2024-07-15 22:25:41.232631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.722 [2024-07-15 22:25:41.232644] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:27.722 [2024-07-15 22:25:41.232651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:27.722 [2024-07-15 22:25:41.232657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:27.722 [2024-07-15 22:25:41.232673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.722 
[2024-07-15 22:25:41.232681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.722 [2024-07-15 22:25:41.232689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.722 [2024-07-15 22:25:41.232708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.722 [2024-07-15 22:25:41.232752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.722 [2024-07-15 22:25:41.232758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.722 [2024-07-15 22:25:41.232762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.722 [2024-07-15 22:25:41.232770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:27.722 [2024-07-15 22:25:41.232777] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:27.722 [2024-07-15 22:25:41.232784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.722 [2024-07-15 22:25:41.232797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.722 [2024-07-15 22:25:41.232810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.722 [2024-07-15 22:25:41.232851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.722 [2024-07-15 22:25:41.232857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.722 [2024-07-15 22:25:41.232861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.722 [2024-07-15 22:25:41.232870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:27.722 [2024-07-15 22:25:41.232877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.722 [2024-07-15 22:25:41.232884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.722 [2024-07-15 22:25:41.232897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.722 [2024-07-15 22:25:41.232909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.722 [2024-07-15 22:25:41.232943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.722 [2024-07-15 22:25:41.232948] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.722 [2024-07-15 22:25:41.232952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.722 [2024-07-15 22:25:41.232960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.722 [2024-07-15 22:25:41.232968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.722 [2024-07-15 22:25:41.232976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.722 [2024-07-15 22:25:41.232982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.722 [2024-07-15 22:25:41.232994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233050] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:27.723 [2024-07-15 22:25:41.233055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:27.723 [2024-07-15 22:25:41.233062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.723 [2024-07-15 22:25:41.233167] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:27.723 [2024-07-15 22:25:41.233172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.723 [2024-07-15 22:25:41.233181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.723 [2024-07-15 22:25:41.233206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 
[2024-07-15 22:25:41.233258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.723 [2024-07-15 22:25:41.233271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.723 [2024-07-15 22:25:41.233296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233362] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.723 [2024-07-15 22:25:41.233367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:27.723 [2024-07-15 22:25:41.233382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.723 [2024-07-15 22:25:41.233414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.723 [2024-07-15 22:25:41.233496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.723 [2024-07-15 22:25:41.233500] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233504] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462510): datao=0, datal=4096, cccid=0 00:14:27.723 [2024-07-15 22:25:41.233509] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c4f00) on tqpair(0x2462510): expected_datao=0, payload_size=4096 00:14:27.723 [2024-07-15 22:25:41.233514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 
[2024-07-15 22:25:41.233520] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233524] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233552] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:27.723 [2024-07-15 22:25:41.233557] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:27.723 [2024-07-15 22:25:41.233565] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:27.723 [2024-07-15 22:25:41.233570] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:27.723 [2024-07-15 22:25:41.233575] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:27.723 [2024-07-15 22:25:41.233580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.723 [2024-07-15 22:25:41.233632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.723 [2024-07-15 22:25:41.233714] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.723 [2024-07-15 22:25:41.233732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.723 [2024-07-15 22:25:41.233750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.723 [2024-07-15 22:25:41.233767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.723 [2024-07-15 22:25:41.233784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.723 [2024-07-15 22:25:41.233808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4f00, cid 0, qid 0 00:14:27.723 [2024-07-15 22:25:41.233813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5080, cid 1, qid 0 00:14:27.723 [2024-07-15 22:25:41.233817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5200, cid 2, qid 0 00:14:27.723 [2024-07-15 22:25:41.233822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.723 [2024-07-15 22:25:41.233826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5500, cid 4, qid 0 00:14:27.723 [2024-07-15 22:25:41.233893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.723 [2024-07-15 22:25:41.233898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.723 [2024-07-15 22:25:41.233902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5500) on tqpair=0x2462510 00:14:27.723 [2024-07-15 22:25:41.233910] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:27.723 [2024-07-15 22:25:41.233915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:27.723 [2024-07-15 22:25:41.233924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.723 [2024-07-15 22:25:41.233928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462510) 00:14:27.723 [2024-07-15 22:25:41.233934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.724 [2024-07-15 22:25:41.233946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5500, cid 4, qid 0 00:14:27.724 [2024-07-15 22:25:41.233990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.724 [2024-07-15 22:25:41.233996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.724 [2024-07-15 22:25:41.233999] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234003] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462510): datao=0, datal=4096, cccid=4 00:14:27.724 [2024-07-15 22:25:41.234008] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5500) on tqpair(0x2462510): expected_datao=0, payload_size=4096 00:14:27.724 [2024-07-15 22:25:41.234012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234018] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.724 [2024-07-15 22:25:41.234035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.724 [2024-07-15 22:25:41.234038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5500) on tqpair=0x2462510 00:14:27.724 [2024-07-15 22:25:41.234052] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:27.724 [2024-07-15 22:25:41.234076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462510) 00:14:27.724 [2024-07-15 22:25:41.234086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.724 [2024-07-15 22:25:41.234093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462510) 00:14:27.724 [2024-07-15 22:25:41.234105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.724 [2024-07-15 22:25:41.234122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x24c5500, cid 4, qid 0 00:14:27.724 [2024-07-15 22:25:41.234127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5680, cid 5, qid 0 00:14:27.724 [2024-07-15 22:25:41.234208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.724 [2024-07-15 22:25:41.234213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.724 [2024-07-15 22:25:41.234217] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234221] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462510): datao=0, datal=1024, cccid=4 00:14:27.724 [2024-07-15 22:25:41.234225] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5500) on tqpair(0x2462510): expected_datao=0, payload_size=1024 00:14:27.724 [2024-07-15 22:25:41.234230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234236] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234239] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.724 [2024-07-15 22:25:41.234250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.724 [2024-07-15 22:25:41.234253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5680) on tqpair=0x2462510 00:14:27.724 [2024-07-15 22:25:41.234270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.724 [2024-07-15 22:25:41.234276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.724 [2024-07-15 22:25:41.234279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5500) on tqpair=0x2462510 00:14:27.724 [2024-07-15 22:25:41.234293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462510) 00:14:27.724 [2024-07-15 22:25:41.234303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.724 [2024-07-15 22:25:41.234319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5500, cid 4, qid 0 00:14:27.724 [2024-07-15 22:25:41.234368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.724 [2024-07-15 22:25:41.234374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.724 [2024-07-15 22:25:41.234377] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234381] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462510): datao=0, datal=3072, cccid=4 00:14:27.724 [2024-07-15 22:25:41.234386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5500) on tqpair(0x2462510): expected_datao=0, payload_size=3072 00:14:27.724 [2024-07-15 22:25:41.234390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234396] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234400] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.724 [2024-07-15 22:25:41.234412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.724 [2024-07-15 22:25:41.234416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5500) on tqpair=0x2462510 00:14:27.724 [2024-07-15 22:25:41.234427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.724 [2024-07-15 22:25:41.234431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462510) 00:14:27.724 [2024-07-15 22:25:41.234437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.724 [2024-07-15 22:25:41.234453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5500, cid 4, qid 0 00:14:27.724 ===================================================== 00:14:27.724 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:27.724 ===================================================== 00:14:27.724 Controller Capabilities/Features 00:14:27.724 ================================ 00:14:27.724 Vendor ID: 0000 00:14:27.724 Subsystem Vendor ID: 0000 00:14:27.724 Serial Number: .................... 00:14:27.724 Model Number: ........................................ 00:14:27.724 Firmware Version: 24.09 00:14:27.724 Recommended Arb Burst: 0 00:14:27.724 IEEE OUI Identifier: 00 00 00 00:14:27.724 Multi-path I/O 00:14:27.724 May have multiple subsystem ports: No 00:14:27.724 May have multiple controllers: No 00:14:27.724 Associated with SR-IOV VF: No 00:14:27.724 Max Data Transfer Size: 131072 00:14:27.724 Max Number of Namespaces: 0 00:14:27.724 Max Number of I/O Queues: 1024 00:14:27.724 NVMe Specification Version (VS): 1.3 00:14:27.724 NVMe Specification Version (Identify): 1.3 00:14:27.724 Maximum Queue Entries: 128 00:14:27.724 Contiguous Queues Required: Yes 00:14:27.724 Arbitration Mechanisms Supported 00:14:27.724 Weighted Round Robin: Not Supported 00:14:27.724 Vendor Specific: Not Supported 00:14:27.724 Reset Timeout: 15000 ms 00:14:27.724 Doorbell Stride: 4 bytes 00:14:27.724 NVM Subsystem Reset: Not Supported 00:14:27.724 Command Sets Supported 00:14:27.724 NVM Command Set: Supported 00:14:27.724 Boot Partition: Not Supported 00:14:27.724 Memory Page Size Minimum: 4096 bytes 00:14:27.724 Memory Page Size Maximum: 4096 bytes 00:14:27.724 Persistent Memory Region: Not Supported 00:14:27.724 Optional Asynchronous Events Supported 00:14:27.724 Namespace Attribute Notices: Not Supported 00:14:27.724 Firmware Activation Notices: Not Supported 00:14:27.724 ANA Change Notices: Not Supported 00:14:27.724 PLE Aggregate Log Change Notices: Not Supported 00:14:27.724 LBA Status Info Alert Notices: Not Supported 00:14:27.724 EGE Aggregate Log Change Notices: Not Supported 00:14:27.724 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.724 Zone Descriptor Change Notices: Not Supported 00:14:27.724 Discovery Log Change Notices: Supported 00:14:27.724 Controller Attributes 00:14:27.724 128-bit Host Identifier: Not Supported 00:14:27.724 Non-Operational Permissive Mode: Not Supported 00:14:27.724 NVM Sets: Not Supported 00:14:27.724 Read Recovery Levels: Not 
Supported 00:14:27.724 Endurance Groups: Not Supported 00:14:27.724 Predictable Latency Mode: Not Supported 00:14:27.724 Traffic Based Keep ALive: Not Supported 00:14:27.724 Namespace Granularity: Not Supported 00:14:27.724 SQ Associations: Not Supported 00:14:27.724 UUID List: Not Supported 00:14:27.724 Multi-Domain Subsystem: Not Supported 00:14:27.724 Fixed Capacity Management: Not Supported 00:14:27.724 Variable Capacity Management: Not Supported 00:14:27.724 Delete Endurance Group: Not Supported 00:14:27.724 Delete NVM Set: Not Supported 00:14:27.724 Extended LBA Formats Supported: Not Supported 00:14:27.724 Flexible Data Placement Supported: Not Supported 00:14:27.724 00:14:27.724 Controller Memory Buffer Support 00:14:27.724 ================================ 00:14:27.724 Supported: No 00:14:27.724 00:14:27.725 Persistent Memory Region Support 00:14:27.725 ================================ 00:14:27.725 Supported: No 00:14:27.725 00:14:27.725 Admin Command Set Attributes 00:14:27.725 ============================ 00:14:27.725 Security Send/Receive: Not Supported 00:14:27.725 Format NVM: Not Supported 00:14:27.725 Firmware Activate/Download: Not Supported 00:14:27.725 Namespace Management: Not Supported 00:14:27.725 Device Self-Test: Not Supported 00:14:27.725 Directives: Not Supported 00:14:27.725 NVMe-MI: Not Supported 00:14:27.725 Virtualization Management: Not Supported 00:14:27.725 Doorbell Buffer Config: Not Supported 00:14:27.725 Get LBA Status Capability: Not Supported 00:14:27.725 Command & Feature Lockdown Capability: Not Supported 00:14:27.725 Abort Command Limit: 1 00:14:27.725 Async Event Request Limit: 4 00:14:27.725 Number of Firmware Slots: N/A 00:14:27.725 Firmware Slot 1 Read-Only: N/A 00:14:27.725 Firmware Activation Without Reset: N/A 00:14:27.725 Multiple Update Detection Support: N/A 00:14:27.725 Firmware Update Granularity: No Information Provided 00:14:27.725 Per-Namespace SMART Log: No 00:14:27.725 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.725 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:27.725 Command Effects Log Page: Not Supported 00:14:27.725 Get Log Page Extended Data: Supported 00:14:27.725 Telemetry Log Pages: Not Supported 00:14:27.725 Persistent Event Log Pages: Not Supported 00:14:27.725 Supported Log Pages Log Page: May Support 00:14:27.725 Commands Supported & Effects Log Page: Not Supported 00:14:27.725 Feature Identifiers & Effects Log Page:May Support 00:14:27.725 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.725 Data Area 4 for Telemetry Log: Not Supported 00:14:27.725 Error Log Page Entries Supported: 128 00:14:27.725 Keep Alive: Not Supported 00:14:27.725 00:14:27.725 NVM Command Set Attributes 00:14:27.725 ========================== 00:14:27.725 Submission Queue Entry Size 00:14:27.725 Max: 1 00:14:27.725 Min: 1 00:14:27.725 Completion Queue Entry Size 00:14:27.725 Max: 1 00:14:27.725 Min: 1 00:14:27.725 Number of Namespaces: 0 00:14:27.725 Compare Command: Not Supported 00:14:27.725 Write Uncorrectable Command: Not Supported 00:14:27.725 Dataset Management Command: Not Supported 00:14:27.725 Write Zeroes Command: Not Supported 00:14:27.725 Set Features Save Field: Not Supported 00:14:27.725 Reservations: Not Supported 00:14:27.725 Timestamp: Not Supported 00:14:27.725 Copy: Not Supported 00:14:27.725 Volatile Write Cache: Not Present 00:14:27.725 Atomic Write Unit (Normal): 1 00:14:27.725 Atomic Write Unit (PFail): 1 00:14:27.725 Atomic Compare & Write Unit: 1 00:14:27.725 Fused Compare & Write: 
Supported 00:14:27.725 Scatter-Gather List 00:14:27.725 SGL Command Set: Supported 00:14:27.725 SGL Keyed: Supported 00:14:27.725 SGL Bit Bucket Descriptor: Not Supported 00:14:27.725 SGL Metadata Pointer: Not Supported 00:14:27.725 Oversized SGL: Not Supported 00:14:27.725 SGL Metadata Address: Not Supported 00:14:27.725 SGL Offset: Supported 00:14:27.725 Transport SGL Data Block: Not Supported 00:14:27.725 Replay Protected Memory Block: Not Supported 00:14:27.725 00:14:27.725 Firmware Slot Information 00:14:27.725 ========================= 00:14:27.725 Active slot: 0 00:14:27.725 00:14:27.725 00:14:27.725 Error Log 00:14:27.725 ========= 00:14:27.725 00:14:27.725 Active Namespaces 00:14:27.725 ================= 00:14:27.725 Discovery Log Page 00:14:27.725 ================== 00:14:27.725 Generation Counter: 2 00:14:27.725 Number of Records: 2 00:14:27.725 Record Format: 0 00:14:27.725 00:14:27.725 Discovery Log Entry 0 00:14:27.725 ---------------------- 00:14:27.725 Transport Type: 3 (TCP) 00:14:27.725 Address Family: 1 (IPv4) 00:14:27.725 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:27.725 Entry Flags: 00:14:27.725 Duplicate Returned Information: 1 00:14:27.725 Explicit Persistent Connection Support for Discovery: 1 00:14:27.725 Transport Requirements: 00:14:27.725 Secure Channel: Not Required 00:14:27.725 Port ID: 0 (0x0000) 00:14:27.725 Controller ID: 65535 (0xffff) 00:14:27.725 Admin Max SQ Size: 128 00:14:27.725 Transport Service Identifier: 4420 00:14:27.725 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:27.725 Transport Address: 10.0.0.2 00:14:27.725 Discovery Log Entry 1 00:14:27.725 ---------------------- 00:14:27.725 Transport Type: 3 (TCP) 00:14:27.725 Address Family: 1 (IPv4) 00:14:27.725 Subsystem Type: 2 (NVM Subsystem) 00:14:27.725 Entry Flags: 00:14:27.725 Duplicate Returned Information: 0 00:14:27.725 Explicit Persistent Connection Support for Discovery: 0 00:14:27.725 Transport Requirements: 00:14:27.725 Secure Channel: Not Required 00:14:27.725 Port ID: 0 (0x0000) 00:14:27.725 Controller ID: 65535 (0xffff) 00:14:27.725 Admin Max SQ Size: 128 00:14:27.725 Transport Service Identifier: 4420 00:14:27.725 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:27.725 Transport Address: 10.0.0.2 [2024-07-15 22:25:41.234498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.725 [2024-07-15 22:25:41.234504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.725 [2024-07-15 22:25:41.234507] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234511] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462510): datao=0, datal=8, cccid=4 00:14:27.725 [2024-07-15 22:25:41.234515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5500) on tqpair(0x2462510): expected_datao=0, payload_size=8 00:14:27.725 [2024-07-15 22:25:41.234520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234526] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234529] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.725 [2024-07-15 22:25:41.234546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.725 [2024-07-15 22:25:41.234549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5500) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234663] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:27.725 [2024-07-15 22:25:41.234676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4f00) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.725 [2024-07-15 22:25:41.234688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5080) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.725 [2024-07-15 22:25:41.234698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5200) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.725 [2024-07-15 22:25:41.234707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.725 [2024-07-15 22:25:41.234720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.725 [2024-07-15 22:25:41.234734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.725 [2024-07-15 22:25:41.234752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.725 [2024-07-15 22:25:41.234795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.725 [2024-07-15 22:25:41.234801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.725 [2024-07-15 22:25:41.234805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.725 [2024-07-15 22:25:41.234832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.725 [2024-07-15 22:25:41.234847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.725 [2024-07-15 22:25:41.234894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.725 [2024-07-15 22:25:41.234900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.725 
[2024-07-15 22:25:41.234904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.725 [2024-07-15 22:25:41.234912] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:27.725 [2024-07-15 22:25:41.234917] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:27.725 [2024-07-15 22:25:41.234925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.725 [2024-07-15 22:25:41.234929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.234933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.234939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.234951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.234985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.234990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.234994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.234997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
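The property traffic above is the driver tearing down the discovery controller: outstanding admin commands complete as ABORTED - SQ DELETION, RTD3E reads back 0 us so the default 10000 ms shutdown timeout applies, and the repeated FABRIC PROPERTY SET/GET pairs that follow are the standard NVMe shutdown handshake (write CC.SHN = 01b, then poll CSTS.SHST for 10b). A minimal self-contained sketch of that handshake is below; the `prop_get`/`prop_set` callbacks and the toy register model are hypothetical stand-ins for the Fabrics Property Get/Set commands, not SPDK internals.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NVME_REG_CC   0x14u   /* Controller Configuration */
#define NVME_REG_CSTS 0x1cu   /* Controller Status */

/* Hypothetical accessors standing in for Fabrics Property Get/Set. */
typedef uint32_t (*prop_get_fn)(uint32_t offset);
typedef void (*prop_set_fn)(uint32_t offset, uint32_t value);

/* Write CC.SHN = 01b (normal shutdown), then poll CSTS.SHST for 10b
 * (shutdown complete) until the RTD3E-derived timeout expires. */
static bool
shutdown_handshake(prop_get_fn prop_get, prop_set_fn prop_set, unsigned timeout_ms)
{
	uint32_t cc = prop_get(NVME_REG_CC);

	cc = (cc & ~(0x3u << 14)) | (0x1u << 14);   /* CC.SHN = normal shutdown */
	prop_set(NVME_REG_CC, cc);

	for (unsigned waited_ms = 0; waited_ms < timeout_ms; waited_ms++) {
		if (((prop_get(NVME_REG_CSTS) >> 2) & 0x3u) == 0x2u) {
			return true;                /* CSTS.SHST = complete */
		}
		nanosleep(&(struct timespec){ .tv_nsec = 1000000 }, NULL);  /* ~1 ms */
	}
	return false;                               /* e.g. after 10000 ms here */
}

/* Toy register model: reports shutdown complete once CC.SHN has been set. */
static uint32_t regs[8];

static uint32_t
toy_get(uint32_t off)
{
	if (off == NVME_REG_CSTS && ((regs[NVME_REG_CC / 4] >> 14) & 0x3u)) {
		return 0x2u << 2;                   /* SHST = shutdown complete */
	}
	return regs[off / 4];
}

static void
toy_set(uint32_t off, uint32_t val)
{
	regs[off / 4] = val;
}

int
main(void)
{
	printf("shutdown %s\n",
	       shutdown_handshake(toy_get, toy_set, 10000) ? "complete" : "timed out");
	return 0;
}
```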
00:14:27.726 [2024-07-15 22:25:41.235151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235686] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.726 [2024-07-15 22:25:41.235793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.726 [2024-07-15 22:25:41.235829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.726 [2024-07-15 22:25:41.235835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.726 [2024-07-15 22:25:41.235838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.726 [2024-07-15 22:25:41.235850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.726 [2024-07-15 22:25:41.235858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.726 [2024-07-15 22:25:41.235864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.235876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.235912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.235918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.235921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.235925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.235933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.235937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.235941] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.235947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.235959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.235995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 
22:25:41.236489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.236510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.236523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.236535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.236575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.236581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.236584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.236588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.727 [2024-07-15 22:25:41.240610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.240621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.240625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462510) 00:14:27.727 [2024-07-15 22:25:41.240633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.727 [2024-07-15 22:25:41.240650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5380, cid 3, qid 0 00:14:27.727 [2024-07-15 22:25:41.240691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.727 [2024-07-15 22:25:41.240697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.727 [2024-07-15 22:25:41.240701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.727 [2024-07-15 22:25:41.240705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5380) on tqpair=0x2462510 00:14:27.728 [2024-07-15 22:25:41.240713] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:27.728 00:14:27.728 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:27.728 [2024-07-15 22:25:41.285808] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
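With the discovery controller shut down ("shutdown complete in 5 milliseconds"), the harness runs `spdk_nvme_identify` a second time, now against the NVM subsystem `nqn.2016-06.io.spdk:cnode1` on the same 10.0.0.2:4420 TCP listener, and the trace that follows is the connect/identify state machine for that controller. For orientation, here is a minimal sketch of the same flow through SPDK's public host API; it assumes v24.09-era headers, the application name is made up, and error handling is trimmed.

```c
/* Connect to the NVMe-oF TCP subsystem exercised above and print a few
 * identify fields. Minimal sketch, not part of the test itself. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the same FABRIC CONNECT / IDENTIFY admin sequence seen above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s MN: %.40s FR: %.8s\n", cdata->sn, cdata->mn, cdata->fr);

	/* Detach triggers the CC.SHN shutdown handshake traced earlier. */
	spdk_nvme_detach(ctrlr);
	return 0;
}
```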
00:14:27.728 [2024-07-15 22:25:41.285856] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74701 ] 00:14:27.989 [2024-07-15 22:25:41.423005] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:27.989 [2024-07-15 22:25:41.423079] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:27.989 [2024-07-15 22:25:41.423084] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:27.989 [2024-07-15 22:25:41.423098] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:27.989 [2024-07-15 22:25:41.423104] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:27.989 [2024-07-15 22:25:41.423371] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:27.989 [2024-07-15 22:25:41.423410] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x658510 0 00:14:27.989 [2024-07-15 22:25:41.437614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:27.989 [2024-07-15 22:25:41.437637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:27.989 [2024-07-15 22:25:41.437643] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:27.990 [2024-07-15 22:25:41.437647] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:27.990 [2024-07-15 22:25:41.437688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.437694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.437698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.437711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:27.990 [2024-07-15 22:25:41.437737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.445612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.445629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.445633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.445648] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:27.990 [2024-07-15 22:25:41.445655] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:27.990 [2024-07-15 22:25:41.445661] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:27.990 [2024-07-15 22:25:41.445677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445685] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.445694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.445713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.445765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.445771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.445774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.445783] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:27.990 [2024-07-15 22:25:41.445791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:27.990 [2024-07-15 22:25:41.445797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.445811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.445824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.445864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.445870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.445873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.445883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:27.990 [2024-07-15 22:25:41.445891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.445897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.445911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.445923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.445964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.445969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.445973] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.445981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.445990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.445998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.446003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.446016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.446056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.446062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.446065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.446074] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:27.990 [2024-07-15 22:25:41.446079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.446086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.446191] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:27.990 [2024-07-15 22:25:41.446195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.446204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.446217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.446230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.446271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.446276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.446280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.446288] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.990 [2024-07-15 22:25:41.446297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.446311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.446323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.446365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.990 [2024-07-15 22:25:41.446371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.990 [2024-07-15 22:25:41.446374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.990 [2024-07-15 22:25:41.446383] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.990 [2024-07-15 22:25:41.446387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:27.990 [2024-07-15 22:25:41.446395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:27.990 [2024-07-15 22:25:41.446404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.990 [2024-07-15 22:25:41.446413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.990 [2024-07-15 22:25:41.446423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.990 [2024-07-15 22:25:41.446437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.990 [2024-07-15 22:25:41.446513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.990 [2024-07-15 22:25:41.446519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.990 [2024-07-15 22:25:41.446523] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446527] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=4096, cccid=0 00:14:27.990 [2024-07-15 22:25:41.446532] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6baf00) on tqpair(0x658510): expected_datao=0, payload_size=4096 00:14:27.990 [2024-07-15 22:25:41.446537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.990 [2024-07-15 22:25:41.446544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446548] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 
22:25:41.446556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.991 [2024-07-15 22:25:41.446561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.991 [2024-07-15 22:25:41.446565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.991 [2024-07-15 22:25:41.446577] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:27.991 [2024-07-15 22:25:41.446582] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:27.991 [2024-07-15 22:25:41.446590] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:27.991 [2024-07-15 22:25:41.446594] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:27.991 [2024-07-15 22:25:41.446608] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:27.991 [2024-07-15 22:25:41.446613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.991 [2024-07-15 22:25:41.446656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.991 [2024-07-15 22:25:41.446694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.991 [2024-07-15 22:25:41.446700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.991 [2024-07-15 22:25:41.446703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.991 [2024-07-15 22:25:41.446714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.991 [2024-07-15 22:25:41.446733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x658510) 00:14:27.991 
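The identify-done lines above fix the effective I/O limits for this connection: the TCP transport itself reports no practical cap (transport max_xfer_size 4294967295), MDTS limits a single command to 131072 bytes, and each request may carry up to 16 SGEs. The 131072 figure follows from the NVMe rule that MDTS is expressed as a power of two in units of the minimum memory page size; in the short worked sketch below, MDTS = 5 and CAP.MPSMIN = 0 (4 KiB pages) are assumptions consistent with the logged value, not fields printed in this trace.

```c
#include <stdint.h>
#include <stdio.h>

/* max transfer = 2^MDTS * minimum page size, capped by the transport limit.
 * (MDTS = 0 would mean "no limit" per the NVMe spec; ignored here.) */
static uint32_t
max_xfer_size(uint8_t mdts, uint8_t cap_mpsmin, uint64_t transport_max)
{
	uint64_t min_page = 1ull << (12 + cap_mpsmin);   /* 4096 when MPSMIN = 0 */
	uint64_t mdts_max = (1ull << mdts) * min_page;   /* 32 * 4096 = 131072 */

	return (uint32_t)(mdts_max < transport_max ? mdts_max : transport_max);
}

int
main(void)
{
	/* Matches "transport max_xfer_size 4294967295" and
	 * "MDTS max_xfer_size 131072" in the identify-done trace. */
	printf("%u\n", max_xfer_size(5, 0, 4294967295ull));
	return 0;
}
```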
[2024-07-15 22:25:41.446746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.991 [2024-07-15 22:25:41.446752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.991 [2024-07-15 22:25:41.446770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.991 [2024-07-15 22:25:41.446787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.991 [2024-07-15 22:25:41.446829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6baf00, cid 0, qid 0 00:14:27.991 [2024-07-15 22:25:41.446834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb080, cid 1, qid 0 00:14:27.991 [2024-07-15 22:25:41.446838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb200, cid 2, qid 0 00:14:27.991 [2024-07-15 22:25:41.446843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.991 [2024-07-15 22:25:41.446847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.991 [2024-07-15 22:25:41.446914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.991 [2024-07-15 22:25:41.446920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.991 [2024-07-15 22:25:41.446924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.991 [2024-07-15 22:25:41.446932] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:27.991 [2024-07-15 22:25:41.446938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446946] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.446958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.446966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.446972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.991 [2024-07-15 22:25:41.446984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.991 [2024-07-15 22:25:41.447029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.991 [2024-07-15 22:25:41.447034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.991 [2024-07-15 22:25:41.447038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.991 [2024-07-15 22:25:41.447094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.447102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.447110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.447120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.991 [2024-07-15 22:25:41.447133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.991 [2024-07-15 22:25:41.447184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.991 [2024-07-15 22:25:41.447189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.991 [2024-07-15 22:25:41.447193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=4096, cccid=4 00:14:27.991 [2024-07-15 22:25:41.447201] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb500) on tqpair(0x658510): expected_datao=0, payload_size=4096 00:14:27.991 [2024-07-15 22:25:41.447206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447212] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.991 [2024-07-15 22:25:41.447229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:27.991 [2024-07-15 22:25:41.447232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.991 [2024-07-15 22:25:41.447245] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:27.991 [2024-07-15 22:25:41.447255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.447263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.991 [2024-07-15 22:25:41.447270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.991 [2024-07-15 22:25:41.447280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.991 [2024-07-15 22:25:41.447293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.991 [2024-07-15 22:25:41.447356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.991 [2024-07-15 22:25:41.447361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.991 [2024-07-15 22:25:41.447365] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.991 [2024-07-15 22:25:41.447369] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=4096, cccid=4 00:14:27.992 [2024-07-15 22:25:41.447373] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb500) on tqpair(0x658510): expected_datao=0, payload_size=4096 00:14:27.992 [2024-07-15 22:25:41.447378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447384] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447387] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.447458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.992 [2024-07-15 22:25:41.447503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.992 [2024-07-15 22:25:41.447509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.992 [2024-07-15 22:25:41.447512] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=4096, cccid=4 00:14:27.992 [2024-07-15 22:25:41.447521] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb500) on tqpair(0x658510): expected_datao=0, payload_size=4096 00:14:27.992 [2024-07-15 22:25:41.447525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447531] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447535] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447568] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447608] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:27.992 [2024-07-15 22:25:41.447613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:27.992 [2024-07-15 22:25:41.447618] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:27.992 [2024-07-15 22:25:41.447636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.447653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.992 [2024-07-15 22:25:41.447684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.992 [2024-07-15 22:25:41.447689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb680, cid 5, qid 0 00:14:27.992 [2024-07-15 22:25:41.447742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb680) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.447806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb680, cid 5, qid 0 00:14:27.992 [2024-07-15 22:25:41.447847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb680) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.447891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb680, cid 5, qid 0 00:14:27.992 [2024-07-15 22:25:41.447928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.447934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.447937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb680) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.447950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.447954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.447960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.447972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb680, cid 5, qid 0 00:14:27.992 [2024-07-15 22:25:41.448014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.992 [2024-07-15 22:25:41.448019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.992 [2024-07-15 22:25:41.448023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.448027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb680) on tqpair=0x658510 00:14:27.992 [2024-07-15 22:25:41.448042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.448046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.448052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.448059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.448063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.448068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.448075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.448079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.448084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.448091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.992 [2024-07-15 22:25:41.448095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x658510) 00:14:27.992 [2024-07-15 22:25:41.448101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.992 [2024-07-15 22:25:41.448115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb680, cid 5, qid 0 00:14:27.992 [2024-07-15 22:25:41.448120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb500, cid 4, qid 0 00:14:27.992 [2024-07-15 22:25:41.448124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb800, cid 6, qid 0 00:14:27.992 [2024-07-15 
22:25:41.448129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb980, cid 7, qid 0 00:14:27.992 [2024-07-15 22:25:41.448235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.993 [2024-07-15 22:25:41.448241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.993 [2024-07-15 22:25:41.448244] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448248] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=8192, cccid=5 00:14:27.993 [2024-07-15 22:25:41.448253] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb680) on tqpair(0x658510): expected_datao=0, payload_size=8192 00:14:27.993 [2024-07-15 22:25:41.448257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448271] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448275] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.993 [2024-07-15 22:25:41.448286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.993 [2024-07-15 22:25:41.448290] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448293] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=512, cccid=4 00:14:27.993 [2024-07-15 22:25:41.448298] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb500) on tqpair(0x658510): expected_datao=0, payload_size=512 00:14:27.993 [2024-07-15 22:25:41.448302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448308] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448312] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.993 [2024-07-15 22:25:41.448322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.993 [2024-07-15 22:25:41.448326] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448330] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=512, cccid=6 00:14:27.993 [2024-07-15 22:25:41.448334] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb800) on tqpair(0x658510): expected_datao=0, payload_size=512 00:14:27.993 [2024-07-15 22:25:41.448338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448344] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448348] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.993 [2024-07-15 22:25:41.448358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.993 [2024-07-15 22:25:41.448362] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x658510): datao=0, datal=4096, cccid=7 00:14:27.993 [2024-07-15 22:25:41.448370] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bb980) on tqpair(0x658510): expected_datao=0, payload_size=4096 00:14:27.993 [2024-07-15 22:25:41.448375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448381] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448384] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.993 [2024-07-15 22:25:41.448397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.993 [2024-07-15 22:25:41.448400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb680) on tqpair=0x658510 00:14:27.993 ===================================================== 00:14:27.993 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.993 ===================================================== 00:14:27.993 Controller Capabilities/Features 00:14:27.993 ================================ 00:14:27.993 Vendor ID: 8086 00:14:27.993 Subsystem Vendor ID: 8086 00:14:27.993 Serial Number: SPDK00000000000001 00:14:27.993 Model Number: SPDK bdev Controller 00:14:27.993 Firmware Version: 24.09 00:14:27.993 Recommended Arb Burst: 6 00:14:27.993 IEEE OUI Identifier: e4 d2 5c 00:14:27.993 Multi-path I/O 00:14:27.993 May have multiple subsystem ports: Yes 00:14:27.993 May have multiple controllers: Yes 00:14:27.993 Associated with SR-IOV VF: No 00:14:27.993 Max Data Transfer Size: 131072 00:14:27.993 Max Number of Namespaces: 32 00:14:27.993 Max Number of I/O Queues: 127 00:14:27.993 NVMe Specification Version (VS): 1.3 00:14:27.993 NVMe Specification Version (Identify): 1.3 00:14:27.993 Maximum Queue Entries: 128 00:14:27.993 Contiguous Queues Required: Yes 00:14:27.993 Arbitration Mechanisms Supported 00:14:27.993 Weighted Round Robin: Not Supported 00:14:27.993 Vendor Specific: Not Supported 00:14:27.993 Reset Timeout: 15000 ms 00:14:27.993 Doorbell Stride: 4 bytes 00:14:27.993 NVM Subsystem Reset: Not Supported 00:14:27.993 Command Sets Supported 00:14:27.993 NVM Command Set: Supported 00:14:27.993 Boot Partition: Not Supported 00:14:27.993 Memory Page Size Minimum: 4096 bytes 00:14:27.993 Memory Page Size Maximum: 4096 bytes 00:14:27.993 Persistent Memory Region: Not Supported 00:14:27.993 Optional Asynchronous Events Supported 00:14:27.993 Namespace Attribute Notices: Supported 00:14:27.993 Firmware Activation Notices: Not Supported 00:14:27.993 ANA Change Notices: Not Supported 00:14:27.993 PLE Aggregate Log Change Notices: Not Supported 00:14:27.993 LBA Status Info Alert Notices: Not Supported 00:14:27.993 EGE Aggregate Log Change Notices: Not Supported 00:14:27.993 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.993 Zone Descriptor Change Notices: Not Supported 00:14:27.993 Discovery Log Change Notices: Not Supported 00:14:27.993 Controller Attributes 00:14:27.993 128-bit Host Identifier: Supported 00:14:27.993 Non-Operational Permissive Mode: Not Supported 00:14:27.993 NVM Sets: Not Supported 00:14:27.993 Read Recovery Levels: Not Supported 00:14:27.993 Endurance Groups: Not Supported 00:14:27.993 Predictable Latency Mode: Not Supported 00:14:27.993 Traffic Based Keep ALive: Not Supported 00:14:27.993 Namespace Granularity: Not Supported 00:14:27.993 SQ Associations: Not 
Supported 00:14:27.993 UUID List: Not Supported 00:14:27.993 Multi-Domain Subsystem: Not Supported 00:14:27.993 Fixed Capacity Management: Not Supported 00:14:27.993 Variable Capacity Management: Not Supported 00:14:27.993 Delete Endurance Group: Not Supported 00:14:27.993 Delete NVM Set: Not Supported 00:14:27.993 Extended LBA Formats Supported: Not Supported 00:14:27.993 Flexible Data Placement Supported: Not Supported 00:14:27.993 00:14:27.993 Controller Memory Buffer Support 00:14:27.993 ================================ 00:14:27.993 Supported: No 00:14:27.993 00:14:27.993 Persistent Memory Region Support 00:14:27.993 ================================ 00:14:27.993 Supported: No 00:14:27.993 00:14:27.993 Admin Command Set Attributes 00:14:27.993 ============================ 00:14:27.993 Security Send/Receive: Not Supported 00:14:27.993 Format NVM: Not Supported 00:14:27.993 Firmware Activate/Download: Not Supported 00:14:27.993 Namespace Management: Not Supported 00:14:27.993 Device Self-Test: Not Supported 00:14:27.993 Directives: Not Supported 00:14:27.993 NVMe-MI: Not Supported 00:14:27.993 Virtualization Management: Not Supported 00:14:27.993 Doorbell Buffer Config: Not Supported 00:14:27.993 Get LBA Status Capability: Not Supported 00:14:27.993 Command & Feature Lockdown Capability: Not Supported 00:14:27.993 Abort Command Limit: 4 00:14:27.993 Async Event Request Limit: 4 00:14:27.993 Number of Firmware Slots: N/A 00:14:27.993 Firmware Slot 1 Read-Only: N/A 00:14:27.993 Firmware Activation Without Reset: [2024-07-15 22:25:41.448417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.993 [2024-07-15 22:25:41.448423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.993 [2024-07-15 22:25:41.448427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb500) on tqpair=0x658510 00:14:27.993 [2024-07-15 22:25:41.448444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.993 [2024-07-15 22:25:41.448450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.993 [2024-07-15 22:25:41.448454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.993 [2024-07-15 22:25:41.448457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb800) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.994 [2024-07-15 22:25:41.448469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.994 [2024-07-15 22:25:41.448473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.994 [2024-07-15 22:25:41.448477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb980) on tqpair=0x658510 00:14:27.994 N/A 00:14:27.994 Multiple Update Detection Support: N/A 00:14:27.994 Firmware Update Granularity: No Information Provided 00:14:27.994 Per-Namespace SMART Log: No 00:14:27.994 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.994 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:27.994 Command Effects Log Page: Supported 00:14:27.994 Get Log Page Extended Data: Supported 00:14:27.994 Telemetry Log Pages: Not Supported 00:14:27.994 Persistent Event Log Pages: Not Supported 00:14:27.994 Supported Log Pages Log Page: May Support 00:14:27.994 Commands Supported & Effects Log Page: Not Supported 00:14:27.994 Feature Identifiers & 
Effects Log Page:May Support 00:14:27.994 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.994 Data Area 4 for Telemetry Log: Not Supported 00:14:27.994 Error Log Page Entries Supported: 128 00:14:27.994 Keep Alive: Supported 00:14:27.994 Keep Alive Granularity: 10000 ms 00:14:27.994 00:14:27.994 NVM Command Set Attributes 00:14:27.994 ========================== 00:14:27.994 Submission Queue Entry Size 00:14:27.994 Max: 64 00:14:27.994 Min: 64 00:14:27.994 Completion Queue Entry Size 00:14:27.994 Max: 16 00:14:27.994 Min: 16 00:14:27.994 Number of Namespaces: 32 00:14:27.994 Compare Command: Supported 00:14:27.994 Write Uncorrectable Command: Not Supported 00:14:27.994 Dataset Management Command: Supported 00:14:27.994 Write Zeroes Command: Supported 00:14:27.994 Set Features Save Field: Not Supported 00:14:27.994 Reservations: Supported 00:14:27.994 Timestamp: Not Supported 00:14:27.994 Copy: Supported 00:14:27.994 Volatile Write Cache: Present 00:14:27.994 Atomic Write Unit (Normal): 1 00:14:27.994 Atomic Write Unit (PFail): 1 00:14:27.994 Atomic Compare & Write Unit: 1 00:14:27.994 Fused Compare & Write: Supported 00:14:27.994 Scatter-Gather List 00:14:27.994 SGL Command Set: Supported 00:14:27.994 SGL Keyed: Supported 00:14:27.994 SGL Bit Bucket Descriptor: Not Supported 00:14:27.994 SGL Metadata Pointer: Not Supported 00:14:27.994 Oversized SGL: Not Supported 00:14:27.994 SGL Metadata Address: Not Supported 00:14:27.994 SGL Offset: Supported 00:14:27.994 Transport SGL Data Block: Not Supported 00:14:27.994 Replay Protected Memory Block: Not Supported 00:14:27.994 00:14:27.994 Firmware Slot Information 00:14:27.994 ========================= 00:14:27.994 Active slot: 1 00:14:27.994 Slot 1 Firmware Revision: 24.09 00:14:27.994 00:14:27.994 00:14:27.994 Commands Supported and Effects 00:14:27.994 ============================== 00:14:27.994 Admin Commands 00:14:27.994 -------------- 00:14:27.994 Get Log Page (02h): Supported 00:14:27.994 Identify (06h): Supported 00:14:27.994 Abort (08h): Supported 00:14:27.994 Set Features (09h): Supported 00:14:27.994 Get Features (0Ah): Supported 00:14:27.994 Asynchronous Event Request (0Ch): Supported 00:14:27.994 Keep Alive (18h): Supported 00:14:27.994 I/O Commands 00:14:27.994 ------------ 00:14:27.994 Flush (00h): Supported LBA-Change 00:14:27.994 Write (01h): Supported LBA-Change 00:14:27.994 Read (02h): Supported 00:14:27.994 Compare (05h): Supported 00:14:27.994 Write Zeroes (08h): Supported LBA-Change 00:14:27.994 Dataset Management (09h): Supported LBA-Change 00:14:27.994 Copy (19h): Supported LBA-Change 00:14:27.994 00:14:27.994 Error Log 00:14:27.994 ========= 00:14:27.994 00:14:27.994 Arbitration 00:14:27.994 =========== 00:14:27.994 Arbitration Burst: 1 00:14:27.994 00:14:27.994 Power Management 00:14:27.994 ================ 00:14:27.994 Number of Power States: 1 00:14:27.994 Current Power State: Power State #0 00:14:27.994 Power State #0: 00:14:27.994 Max Power: 0.00 W 00:14:27.994 Non-Operational State: Operational 00:14:27.994 Entry Latency: Not Reported 00:14:27.994 Exit Latency: Not Reported 00:14:27.994 Relative Read Throughput: 0 00:14:27.994 Relative Read Latency: 0 00:14:27.994 Relative Write Throughput: 0 00:14:27.994 Relative Write Latency: 0 00:14:27.994 Idle Power: Not Reported 00:14:27.994 Active Power: Not Reported 00:14:27.994 Non-Operational Permissive Mode: Not Supported 00:14:27.994 00:14:27.994 Health Information 00:14:27.994 ================== 00:14:27.994 Critical Warnings: 00:14:27.994 Available Spare Space: 
OK 00:14:27.994 Temperature: OK 00:14:27.994 Device Reliability: OK 00:14:27.994 Read Only: No 00:14:27.994 Volatile Memory Backup: OK 00:14:27.994 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:27.994 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.994 Available Spare: 0% 00:14:27.994 Available Spare Threshold: 0% 00:14:27.994 Life Percentage Used:[2024-07-15 22:25:41.448568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.994 [2024-07-15 22:25:41.448573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x658510) 00:14:27.994 [2024-07-15 22:25:41.448580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.994 [2024-07-15 22:25:41.448594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb980, cid 7, qid 0 00:14:27.994 [2024-07-15 22:25:41.448644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.994 [2024-07-15 22:25:41.448650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.994 [2024-07-15 22:25:41.448654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.994 [2024-07-15 22:25:41.448657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb980) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448693] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:27.994 [2024-07-15 22:25:41.448701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6baf00) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.994 [2024-07-15 22:25:41.448713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb080) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.994 [2024-07-15 22:25:41.448722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb200) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.994 [2024-07-15 22:25:41.448732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.994 [2024-07-15 22:25:41.448736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.994 [2024-07-15 22:25:41.448744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.448758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.448773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.448815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.448821] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.448825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.448835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.448849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.448863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.448918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.448923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.448927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.448935] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:27.995 [2024-07-15 22:25:41.448940] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:27.995 [2024-07-15 22:25:41.448948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.448956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.448962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.448974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449106] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449391] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.449515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.449551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.449557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.995 [2024-07-15 22:25:41.449560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.995 [2024-07-15 22:25:41.449572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.995 [2024-07-15 22:25:41.449580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.995 [2024-07-15 22:25:41.449586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.995 [2024-07-15 22:25:41.453608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.995 [2024-07-15 22:25:41.453624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.995 [2024-07-15 22:25:41.453630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.996 [2024-07-15 22:25:41.453634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.996 [2024-07-15 22:25:41.453638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.996 
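The human-readable controller report interleaved with these *DEBUG* records is printed by SPDK's identify example after it connects over TCP. A plausible invocation is sketched below; the binary path is an assumption, and the *DEBUG* records additionally require a build with --enable-debug and the relevant log flags enabled:
    # initiator-side sketch (binary path is an assumption)
    ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
Everything in the report (Max Data Transfer Size 131072, Max Number of Namespaces 32, Keep Alive Granularity 10000 ms, and so on) is read back from the target over the admin queue traced here.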
[2024-07-15 22:25:41.453649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.996 [2024-07-15 22:25:41.453654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.996 [2024-07-15 22:25:41.453657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x658510) 00:14:27.996 [2024-07-15 22:25:41.453664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.996 [2024-07-15 22:25:41.453682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bb380, cid 3, qid 0 00:14:27.996 [2024-07-15 22:25:41.453725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.996 [2024-07-15 22:25:41.453731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.996 [2024-07-15 22:25:41.453734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.996 [2024-07-15 22:25:41.453738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6bb380) on tqpair=0x658510 00:14:27.996 [2024-07-15 22:25:41.453745] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:14:27.996 0% 00:14:27.996 Data Units Read: 0 00:14:27.996 Data Units Written: 0 00:14:27.996 Host Read Commands: 0 00:14:27.996 Host Write Commands: 0 00:14:27.996 Controller Busy Time: 0 minutes 00:14:27.996 Power Cycles: 0 00:14:27.996 Power On Hours: 0 hours 00:14:27.996 Unsafe Shutdowns: 0 00:14:27.996 Unrecoverable Media Errors: 0 00:14:27.996 Lifetime Error Log Entries: 0 00:14:27.996 Warning Temperature Time: 0 minutes 00:14:27.996 Critical Temperature Time: 0 minutes 00:14:27.996 00:14:27.996 Number of Queues 00:14:27.996 ================ 00:14:27.996 Number of I/O Submission Queues: 127 00:14:27.996 Number of I/O Completion Queues: 127 00:14:27.996 00:14:27.996 Active Namespaces 00:14:27.996 ================= 00:14:27.996 Namespace ID:1 00:14:27.996 Error Recovery Timeout: Unlimited 00:14:27.996 Command Set Identifier: NVM (00h) 00:14:27.996 Deallocate: Supported 00:14:27.996 Deallocated/Unwritten Error: Not Supported 00:14:27.996 Deallocated Read Value: Unknown 00:14:27.996 Deallocate in Write Zeroes: Not Supported 00:14:27.996 Deallocated Guard Field: 0xFFFF 00:14:27.996 Flush: Supported 00:14:27.996 Reservation: Supported 00:14:27.996 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.996 Size (in LBAs): 131072 (0GiB) 00:14:27.996 Capacity (in LBAs): 131072 (0GiB) 00:14:27.996 Utilization (in LBAs): 131072 (0GiB) 00:14:27.996 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:27.996 EUI64: ABCDEF0123456789 00:14:27.996 UUID: 8a82c517-4c9b-4e8f-aff5-726171260917 00:14:27.996 Thin Provisioning: Not Supported 00:14:27.996 Per-NS Atomic Units: Yes 00:14:27.996 Atomic Boundary Size (Normal): 0 00:14:27.996 Atomic Boundary Size (PFail): 0 00:14:27.996 Atomic Boundary Offset: 0 00:14:27.996 Maximum Single Source Range Length: 65535 00:14:27.996 Maximum Copy Length: 65535 00:14:27.996 Maximum Source Range Count: 1 00:14:27.996 NGUID/EUI64 Never Reused: No 00:14:27.996 Namespace Write Protected: No 00:14:27.996 Number of LBA Formats: 1 00:14:27.996 Current LBA Format: LBA Format #00 00:14:27.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.996 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.996 rmmod nvme_tcp 00:14:27.996 rmmod nvme_fabrics 00:14:27.996 rmmod nvme_keyring 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74664 ']' 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74664 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74664 ']' 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74664 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.996 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74664 00:14:28.256 killing process with pid 74664 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74664' 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74664 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74664 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.256 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.515 22:25:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:28.515 00:14:28.515 real 
0m2.543s 00:14:28.515 user 0m6.722s 00:14:28.515 sys 0m0.747s 00:14:28.515 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.515 ************************************ 00:14:28.515 END TEST nvmf_identify 00:14:28.515 ************************************ 00:14:28.515 22:25:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:28.515 22:25:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:28.515 22:25:41 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:28.515 22:25:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.515 22:25:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.515 22:25:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.515 ************************************ 00:14:28.515 START TEST nvmf_perf 00:14:28.515 ************************************ 00:14:28.515 22:25:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:28.515 * Looking for test storage... 00:14:28.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.515 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:28.516 
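Note: the nvmf_perf stage now under way is driven entirely by test/nvmf/host/perf.sh. A minimal way to reproduce this stage outside the CI wrapper, assuming a built SPDK tree at the path shown in the log and root privileges (the script creates network namespaces, veth pairs and bridges), would be:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/perf.sh --transport=tcp   # same script and argument as the run_test call above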
22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.516 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:28.775 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:28.776 Cannot find device "nvmf_tgt_br" 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.776 Cannot find device "nvmf_tgt_br2" 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:28.776 Cannot find device "nvmf_tgt_br" 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:28.776 Cannot find device "nvmf_tgt_br2" 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@159 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.776 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:29.034 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:14:29.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:29.034 00:14:29.034 --- 10.0.0.2 ping statistics --- 00:14:29.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.034 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:29.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:29.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:14:29.034 00:14:29.034 --- 10.0.0.3 ping statistics --- 00:14:29.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.034 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:29.034 00:14:29.034 --- 10.0.0.1 ping statistics --- 00:14:29.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.034 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74871 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74871 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74871 ']' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.034 22:25:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:29.293 [2024-07-15 22:25:42.673170] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
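Note: the successful pings above confirm the veth topology that nvmf_veth_init assembled before the target was started. Condensed from the commands traced in this section (interface names, addresses and the port-4420 firewall rule exactly as logged; the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here), the build-up is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT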
00:14:29.293 [2024-07-15 22:25:42.673240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.293 [2024-07-15 22:25:42.816838] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.293 [2024-07-15 22:25:42.905275] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.293 [2024-07-15 22:25:42.905333] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.293 [2024-07-15 22:25:42.905343] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.293 [2024-07-15 22:25:42.905351] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.293 [2024-07-15 22:25:42.905357] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.293 [2024-07-15 22:25:42.905549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.293 [2024-07-15 22:25:42.905802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.293 [2024-07-15 22:25:42.906403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.293 [2024-07-15 22:25:42.906403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.552 [2024-07-15 22:25:42.947784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:30.117 22:25:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:30.414 22:25:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:30.414 22:25:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:30.671 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:30.927 [2024-07-15 22:25:44.526748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
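Note: at this point the target application is running inside the namespace and its TCP transport exists, so listeners can be added to it. Stripped of the helper functions, the two essential commands are the ones traced above; the '&' and the fixed sleep below only approximate what nvmfappstart/waitforlisten do:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sleep 2   # stand-in for waitforlisten, which waits for /var/tmp/spdk.sock to answer
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o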
00:14:30.927 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.185 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:31.185 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.443 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:31.443 22:25:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:31.700 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.958 [2024-07-15 22:25:45.334556] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.958 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.958 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:31.958 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:31.958 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:31.958 22:25:45 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:33.330 Initializing NVMe Controllers 00:14:33.330 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:33.330 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:33.330 Initialization complete. Launching workers. 00:14:33.330 ======================================================== 00:14:33.330 Latency(us) 00:14:33.330 Device Information : IOPS MiB/s Average min max 00:14:33.330 PCIE (0000:00:10.0) NSID 1 from core 0: 19655.20 76.78 1629.00 477.90 8626.03 00:14:33.330 ======================================================== 00:14:33.330 Total : 19655.20 76.78 1629.00 477.90 8626.03 00:14:33.330 00:14:33.330 22:25:46 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:34.283 Initializing NVMe Controllers 00:14:34.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:34.283 Initialization complete. Launching workers. 
00:14:34.283 ======================================================== 00:14:34.283 Latency(us) 00:14:34.283 Device Information : IOPS MiB/s Average min max 00:14:34.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5162.93 20.17 192.70 74.65 7121.92 00:14:34.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8125.66 4997.25 12026.97 00:14:34.283 ======================================================== 00:14:34.283 Total : 5286.93 20.65 378.76 74.65 12026.97 00:14:34.283 00:14:34.542 22:25:48 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:35.916 Initializing NVMe Controllers 00:14:35.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:35.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.916 Initialization complete. Launching workers. 00:14:35.916 ======================================================== 00:14:35.916 Latency(us) 00:14:35.917 Device Information : IOPS MiB/s Average min max 00:14:35.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11347.38 44.33 2820.21 552.49 6249.92 00:14:35.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4016.59 15.69 8004.10 6381.49 9804.51 00:14:35.917 ======================================================== 00:14:35.917 Total : 15363.97 60.02 4175.43 552.49 9804.51 00:14:35.917 00:14:35.917 22:25:49 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:35.917 22:25:49 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:38.446 Initializing NVMe Controllers 00:14:38.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.447 Controller IO queue size 128, less than required. 00:14:38.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:38.447 Controller IO queue size 128, less than required. 00:14:38.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:38.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:38.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:38.447 Initialization complete. Launching workers. 
00:14:38.447 ========================================================
00:14:38.447 Latency(us)
00:14:38.447 Device Information : IOPS MiB/s Average min max
00:14:38.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2326.42 581.60 55350.42 25726.50 101463.69
00:14:38.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 693.33 173.33 191272.09 60743.80 311271.67
00:14:38.447 ========================================================
00:14:38.447 Total : 3019.74 754.94 86557.73 25726.50 311271.67
00:14:38.447
00:14:38.447 22:25:51 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:14:38.447 Initializing NVMe Controllers
00:14:38.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:38.447 Controller IO queue size 128, less than required.
00:14:38.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:38.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:14:38.447 Controller IO queue size 128, less than required.
00:14:38.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:38.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:14:38.447 WARNING: Some requested NVMe devices were skipped
00:14:38.447 No valid NVMe controllers or AIO or URING devices found
00:14:38.447 22:25:52 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:14:40.982 Initializing NVMe Controllers
00:14:40.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:40.982 Controller IO queue size 128, less than required.
00:14:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:40.982 Controller IO queue size 128, less than required.
00:14:40.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:40.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:40.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:40.982 Initialization complete. Launching workers.
00:14:40.982 00:14:40.982 ==================== 00:14:40.982 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:40.982 TCP transport: 00:14:40.982 polls: 12388 00:14:40.982 idle_polls: 7193 00:14:40.982 sock_completions: 5195 00:14:40.982 nvme_completions: 8537 00:14:40.982 submitted_requests: 12828 00:14:40.982 queued_requests: 1 00:14:40.982 00:14:40.982 ==================== 00:14:40.982 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:40.982 TCP transport: 00:14:40.982 polls: 14652 00:14:40.982 idle_polls: 8953 00:14:40.982 sock_completions: 5699 00:14:40.982 nvme_completions: 8307 00:14:40.982 submitted_requests: 12414 00:14:40.982 queued_requests: 1 00:14:40.982 ======================================================== 00:14:40.982 Latency(us) 00:14:40.982 Device Information : IOPS MiB/s Average min max 00:14:40.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2133.99 533.50 61386.60 27549.32 105623.82 00:14:40.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2076.49 519.12 61809.52 28165.45 100244.85 00:14:40.982 ======================================================== 00:14:40.982 Total : 4210.47 1052.62 61595.17 27549.32 105623.82 00:14:40.982 00:14:40.982 22:25:54 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:40.982 22:25:54 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.244 rmmod nvme_tcp 00:14:41.244 rmmod nvme_fabrics 00:14:41.244 rmmod nvme_keyring 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74871 ']' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74871 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74871 ']' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74871 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74871 00:14:41.244 killing process with pid 74871 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:41.244 22:25:54 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74871' 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74871 00:14:41.244 22:25:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74871 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:42.179 ************************************ 00:14:42.179 END TEST nvmf_perf 00:14:42.179 ************************************ 00:14:42.179 00:14:42.179 real 0m13.641s 00:14:42.179 user 0m49.042s 00:14:42.179 sys 0m4.104s 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.179 22:25:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:42.179 22:25:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:42.179 22:25:55 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:42.179 22:25:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:42.179 22:25:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.179 22:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.179 ************************************ 00:14:42.179 START TEST nvmf_fio_host 00:14:42.179 ************************************ 00:14:42.179 22:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:42.179 * Looking for test storage... 
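Note: the nvmf_fio_host stage starting here follows the same pattern as nvmf_perf above, but drives I/O through fio's SPDK NVMe plugin (ioengine=spdk) instead of spdk_nvme_perf. To reproduce it on its own, with the same assumptions as before (built tree at the logged path, root privileges):

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/fio.sh --transport=tcp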
00:14:42.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:42.438 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
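Note: nvmftestinit always tears down any leftover topology before rebuilding it, so the 'Cannot find device ...' and 'Cannot open network namespace ...' messages seen during nvmf_veth_init (earlier under nvmf_perf and again just below) are expected when the previous test cleaned up after itself; each failed cleanup command is followed by '# true' in the trace because failures there are tolerated. The teardown amounts to roughly the following; the final netns delete is an assumption, since _remove_spdk_ns runs with xtrace disabled:

  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true   # assumed to be part of _remove_spdk_ns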
00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:42.439 Cannot find device "nvmf_tgt_br" 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.439 Cannot find device "nvmf_tgt_br2" 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:42.439 Cannot find device "nvmf_tgt_br" 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:42.439 Cannot find device "nvmf_tgt_br2" 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:42.439 22:25:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.439 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:42.698 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:42.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:14:42.699 00:14:42.699 --- 10.0.0.2 ping statistics --- 00:14:42.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.699 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:42.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:14:42.699 00:14:42.699 --- 10.0.0.3 ping statistics --- 00:14:42.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.699 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:42.699 00:14:42.699 --- 10.0.0.1 ping statistics --- 00:14:42.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.699 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75271 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75271 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75271 ']' 00:14:42.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.699 22:25:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 [2024-07-15 22:25:56.340929] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:42.968 [2024-07-15 22:25:56.340997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.968 [2024-07-15 22:25:56.482993] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.968 [2024-07-15 22:25:56.574234] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
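Note: 'waitforlisten 75271' above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A closer stand-in than the fixed sleep sketched earlier is to poll the RPC socket directly; rpc_get_methods is used here only as a cheap query, and the real helper may do this differently:

  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done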
00:14:42.968 [2024-07-15 22:25:56.574278] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.968 [2024-07-15 22:25:56.574288] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.968 [2024-07-15 22:25:56.574296] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.968 [2024-07-15 22:25:56.574303] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.968 [2024-07-15 22:25:56.574417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.968 [2024-07-15 22:25:56.574696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.968 [2024-07-15 22:25:56.575245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.968 [2024-07-15 22:25:56.575245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.226 [2024-07-15 22:25:56.616303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.792 [2024-07-15 22:25:57.351590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.792 22:25:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:44.050 22:25:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:44.050 Malloc1 00:14:44.050 22:25:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.308 22:25:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.566 22:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.825 [2024-07-15 22:25:58.200605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:44.825 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:45.084 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:45.084 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:45.084 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:45.084 22:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:45.084 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:45.084 fio-3.35 00:14:45.084 Starting 1 thread 00:14:47.609 00:14:47.609 test: (groupid=0, jobs=1): err= 0: pid=75343: Mon Jul 15 22:26:00 2024 00:14:47.609 read: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec) 00:14:47.609 slat (nsec): min=1549, max=317026, avg=1696.65, stdev=2571.44 00:14:47.609 clat (usec): min=2324, max=9849, avg=5651.97, stdev=365.70 00:14:47.609 lat (usec): min=2360, max=9850, avg=5653.66, stdev=365.61 00:14:47.609 clat percentiles (usec): 00:14:47.609 | 1.00th=[ 4883], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:14:47.609 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5735], 00:14:47.609 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6194], 00:14:47.609 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 8029], 99.95th=[ 9372], 00:14:47.609 | 99.99th=[ 9765] 00:14:47.609 bw ( KiB/s): min=46120, max=48039, per=99.96%, avg=47239.75, stdev=876.27, samples=4 00:14:47.609 iops : min=11530, max=12009, avg=11809.75, stdev=218.84, samples=4 00:14:47.609 write: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(92.1MiB/2005msec); 0 zone resets 00:14:47.609 
slat (nsec): min=1597, max=212825, avg=1735.86, stdev=1525.71 00:14:47.609 clat (usec): min=2208, max=9869, avg=5139.18, stdev=337.82 00:14:47.609 lat (usec): min=2220, max=9871, avg=5140.91, stdev=337.84 00:14:47.609 clat percentiles (usec): 00:14:47.609 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4752], 20.00th=[ 4883], 00:14:47.609 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5211], 00:14:47.609 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:14:47.609 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 8225], 99.95th=[ 9241], 00:14:47.609 | 99.99th=[ 9765] 00:14:47.609 bw ( KiB/s): min=46568, max=47360, per=99.90%, avg=46986.50, stdev=359.92, samples=4 00:14:47.609 iops : min=11642, max=11840, avg=11746.50, stdev=90.06, samples=4 00:14:47.609 lat (msec) : 4=0.15%, 10=99.85% 00:14:47.609 cpu : usr=68.41%, sys=24.90%, ctx=12, majf=0, minf=6 00:14:47.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:47.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:47.609 issued rwts: total=23688,23576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:47.609 00:14:47.609 Run status group 0 (all jobs): 00:14:47.609 READ: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:14:47.609 WRITE: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=92.1MiB (96.6MB), run=2005-2005msec 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.609 22:26:00 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:47.609 22:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:47.609 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:47.609 fio-3.35 00:14:47.609 Starting 1 thread 00:14:50.145 00:14:50.145 test: (groupid=0, jobs=1): err= 0: pid=75397: Mon Jul 15 22:26:03 2024 00:14:50.145 read: IOPS=10.8k, BW=169MiB/s (178MB/s)(339MiB/2002msec) 00:14:50.145 slat (usec): min=2, max=123, avg= 2.81, stdev= 1.56 00:14:50.145 clat (usec): min=2221, max=17414, avg=6591.54, stdev=2103.69 00:14:50.145 lat (usec): min=2224, max=17423, avg=6594.35, stdev=2103.85 00:14:50.145 clat percentiles (usec): 00:14:50.145 | 1.00th=[ 3097], 5.00th=[ 3687], 10.00th=[ 4113], 20.00th=[ 4752], 00:14:50.145 | 30.00th=[ 5342], 40.00th=[ 5800], 50.00th=[ 6259], 60.00th=[ 6783], 00:14:50.145 | 70.00th=[ 7504], 80.00th=[ 8225], 90.00th=[ 9503], 95.00th=[10683], 00:14:50.145 | 99.00th=[12256], 99.50th=[12911], 99.90th=[13566], 99.95th=[13698], 00:14:50.145 | 99.99th=[17433] 00:14:50.145 bw ( KiB/s): min=82464, max=93184, per=49.93%, avg=86640.00, stdev=4588.14, samples=4 00:14:50.145 iops : min= 5154, max= 5824, avg=5415.00, stdev=286.76, samples=4 00:14:50.145 write: IOPS=6277, BW=98.1MiB/s (103MB/s)(178MiB/1813msec); 0 zone resets 00:14:50.145 slat (usec): min=28, max=435, avg=30.95, stdev= 9.68 00:14:50.145 clat (usec): min=2078, max=21091, avg=9176.97, stdev=1975.72 00:14:50.145 lat (usec): min=2107, max=21122, avg=9207.92, stdev=1979.45 00:14:50.145 clat percentiles (usec): 00:14:50.145 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7635], 00:14:50.145 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:14:50.145 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:14:50.145 | 99.00th=[16712], 99.50th=[19268], 99.90th=[20055], 99.95th=[20317], 00:14:50.145 | 99.99th=[20841] 00:14:50.145 bw ( KiB/s): min=87456, max=97280, per=90.20%, avg=90592.00, stdev=4508.78, samples=4 00:14:50.145 iops : min= 5466, max= 6080, avg=5662.00, stdev=281.80, samples=4 00:14:50.145 lat (msec) : 4=5.69%, 10=80.00%, 20=14.22%, 50=0.09% 00:14:50.145 cpu : usr=81.02%, sys=14.84%, ctx=5, majf=0, minf=14 00:14:50.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:50.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.145 issued rwts: total=21710,11381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.145 00:14:50.145 Run status group 0 (all jobs): 00:14:50.145 READ: bw=169MiB/s (178MB/s), 
169MiB/s-169MiB/s (178MB/s-178MB/s), io=339MiB (356MB), run=2002-2002msec 00:14:50.145 WRITE: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=178MiB (186MB), run=1813-1813msec 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.145 rmmod nvme_tcp 00:14:50.145 rmmod nvme_fabrics 00:14:50.145 rmmod nvme_keyring 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75271 ']' 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75271 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75271 ']' 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75271 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75271 00:14:50.145 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.145 killing process with pid 75271 00:14:50.146 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.146 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75271' 00:14:50.146 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75271 00:14:50.146 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75271 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.404 22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.404 
22:26:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.404 22:26:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:50.661 00:14:50.661 real 0m8.382s 00:14:50.661 user 0m33.274s 00:14:50.661 sys 0m2.598s 00:14:50.661 22:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.661 22:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:50.661 ************************************ 00:14:50.661 END TEST nvmf_fio_host 00:14:50.661 ************************************ 00:14:50.661 22:26:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:50.661 22:26:04 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:50.661 22:26:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:50.661 22:26:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.661 22:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.661 ************************************ 00:14:50.661 START TEST nvmf_failover 00:14:50.661 ************************************ 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:50.661 * Looking for test storage... 00:14:50.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.661 22:26:04 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.662 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.919 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:50.920 Cannot find device "nvmf_tgt_br" 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:14:50.920 Cannot find device "nvmf_tgt_br2" 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:50.920 Cannot find device "nvmf_tgt_br" 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:50.920 Cannot find device "nvmf_tgt_br2" 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:50.920 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:51.178 22:26:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:51.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:51.178 00:14:51.178 --- 10.0.0.2 ping statistics --- 00:14:51.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.178 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:51.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:51.178 00:14:51.178 --- 10.0.0.3 ping statistics --- 00:14:51.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.178 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:51.178 00:14:51.178 --- 10.0.0.1 ping statistics --- 00:14:51.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.178 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
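For reference, the virtual topology that nvmf_veth_init traced above can be reproduced on its own with the same commands; a minimal sketch using the interface names and addresses from this run (assumes a clean host with no leftover nvmf_* links or namespaces):
# One veth pair for the initiator side, two for the target side inside a dedicated netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator at 10.0.0.1 reaches the in-namespace target addresses 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the three host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology before the target is started.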
00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75610 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75610 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75610 ']' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.178 22:26:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.179 22:26:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.179 [2024-07-15 22:26:04.759105] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:51.179 [2024-07-15 22:26:04.759163] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.436 [2024-07-15 22:26:04.903794] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.436 [2024-07-15 22:26:04.989525] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.436 [2024-07-15 22:26:04.989564] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.436 [2024-07-15 22:26:04.989573] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.436 [2024-07-15 22:26:04.989582] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.436 [2024-07-15 22:26:04.989589] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
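While the target is up, the tracepoint buffer announced in the notices above can be inspected directly; a short sketch, where only the -s/-i arguments and the /dev/shm path come from the notice and the spdk_trace binary location is assumed from this build tree:
# Snapshot events of the running app registered as 'nvmf' with shared-memory id 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw buffer for offline analysis after the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0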
00:14:51.436 [2024-07-15 22:26:04.989785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.436 [2024-07-15 22:26:04.990497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.436 [2024-07-15 22:26:04.990494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.436 [2024-07-15 22:26:05.032051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.001 22:26:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.001 22:26:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:52.001 22:26:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.001 22:26:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.001 22:26:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:52.272 22:26:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.272 22:26:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.272 [2024-07-15 22:26:05.811745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.272 22:26:05 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:52.542 Malloc0 00:14:52.542 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.799 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.799 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.056 [2024-07-15 22:26:06.538628] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.056 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:53.313 [2024-07-15 22:26:06.730411] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:53.313 [2024-07-15 22:26:06.918272] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75662 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
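Condensed from the trace above, the target-side setup for this failover run is the following RPC sequence (rpc.py path and arguments exactly as traced; shown only as a readable summary of the steps):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three TCP listeners on the same subsystem; the test later removes and re-adds them to force path failover
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422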
00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75662 /var/tmp/bdevperf.sock 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75662 ']' 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.313 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.314 22:26:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:54.245 22:26:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.245 22:26:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:54.245 22:26:07 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.502 NVMe0n1 00:14:54.503 22:26:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.760 00:14:54.760 22:26:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75686 00:14:54.760 22:26:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.760 22:26:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:56.133 22:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.133 22:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:59.417 22:26:12 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:59.417 00:14:59.417 22:26:12 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:59.417 22:26:12 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:02.704 22:26:15 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.704 [2024-07-15 22:26:16.164890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.704 22:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:03.637 22:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:03.895 22:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75686 00:15:10.498 0 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75662 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@948 -- # '[' -z 75662 ']' 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75662 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75662 00:15:10.498 killing process with pid 75662 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75662' 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75662 00:15:10.498 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75662 00:15:10.499 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:10.499 [2024-07-15 22:26:06.986298] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:10.499 [2024-07-15 22:26:06.986379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:15:10.499 [2024-07-15 22:26:07.127037] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.499 [2024-07-15 22:26:07.213642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.499 [2024-07-15 22:26:07.254251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:10.499 Running I/O for 15 seconds... 
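The ABORTED - SQ DELETION wall that follows is the expected effect of the failover steps traced above: while bdevperf runs verify I/O against NVMe0, listeners are torn down and brought back, so in-flight commands on a dropped path complete as aborted and bdev_nvme fails over to a remaining listener. The driving sequence, condensed from the trace (initiator-side RPCs go to /var/tmp/bdevperf.sock, target-side RPCs to the default socket):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Initiator side: one controller name, two initial paths
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Target side, while I/O is in flight: drop 4420, add a path at 4422, drop 4421, re-add 4420, drop 4422
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422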
00:15:10.499 [2024-07-15 22:26:09.506329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.506827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.506977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.506989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.507014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.499 [2024-07-15 22:26:09.507046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96304 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.499 [2024-07-15 22:26:09.507322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.499 [2024-07-15 22:26:09.507334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:10.500 [2024-07-15 22:26:09.507497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.507689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507774] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.507977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.507989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.500 [2024-07-15 22:26:09.508229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.508281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.508307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.508333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.500 [2024-07-15 22:26:09.508347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.500 [2024-07-15 22:26:09.508359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:10.501 [2024-07-15 22:26:09.508586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.508659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508859] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.508981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.508994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.501 [2024-07-15 22:26:09.509034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.501 [2024-07-15 22:26:09.509221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146bd80 is same with the state(5) to be set 00:15:10.501 [2024-07-15 22:26:09.509251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.501 [2024-07-15 22:26:09.509260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.501 [2024-07-15 22:26:09.509269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:15:10.501 [2024-07-15 22:26:09.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.501 [2024-07-15 22:26:09.509304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.501 [2024-07-15 22:26:09.509314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:15:10.501 [2024-07-15 22:26:09.509326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.501 [2024-07-15 22:26:09.509346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.501 [2024-07-15 22:26:09.509355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.501 [2024-07-15 22:26:09.509364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:15:10.501 [2024-07-15 22:26:09.509376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 
00:15:10.502 [2024-07-15 22:26:09.509954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.509967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.509976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.509985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.509997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.502 [2024-07-15 22:26:09.510241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.502 [2024-07-15 22:26:09.510250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:15:10.502 [2024-07-15 22:26:09.510262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510313] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x146bd80 was disconnected and freed. reset controller. 00:15:10.502 [2024-07-15 22:26:09.510328] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:10.502 [2024-07-15 22:26:09.510377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.502 [2024-07-15 22:26:09.510391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.502 [2024-07-15 22:26:09.510406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.502 [2024-07-15 22:26:09.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:09.510431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:09.510443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:09.510456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:09.524248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:09.524292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:10.503 [2024-07-15 22:26:09.524370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b710 (9): Bad file descriptor 00:15:10.503 [2024-07-15 22:26:09.528299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:10.503 [2024-07-15 22:26:09.564538] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:10.503 [2024-07-15 22:26:12.970707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:12.970767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:12.970796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:12.970822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.503 [2024-07-15 22:26:12.970847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b710 is same with the state(5) to be set 00:15:10.503 [2024-07-15 22:26:12.970919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.970934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.970966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.970980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.970993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.503 [2024-07-15 22:26:12.971360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.503 [2024-07-15 22:26:12.971508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.503 [2024-07-15 22:26:12.971520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 
[2024-07-15 22:26:12.971917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.971982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.971995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.504 [2024-07-15 22:26:12.972429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.504 [2024-07-15 22:26:12.972615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.504 [2024-07-15 22:26:12.972628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.972654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.972867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.972893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.972919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.972945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.972971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.972985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:10.505 [2024-07-15 22:26:12.972998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.505 [2024-07-15 22:26:12.973288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.505 [2024-07-15 22:26:12.973611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.505 [2024-07-15 22:26:12.973623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:12.973943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.973969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.973982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.973995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 
[2024-07-15 22:26:12.974087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.506 [2024-07-15 22:26:12.974343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974386] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.506 [2024-07-15 22:26:12.974396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.506 [2024-07-15 22:26:12.974406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44584 len:8 PRP1 0x0 PRP2 0x0 00:15:10.506 [2024-07-15 22:26:12.974418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:12.974467] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x146dd40 was disconnected and freed. reset controller. 00:15:10.506 [2024-07-15 22:26:12.974481] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:10.506 [2024-07-15 22:26:12.974495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:10.506 [2024-07-15 22:26:12.977207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:10.506 [2024-07-15 22:26:12.977243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b710 (9): Bad file descriptor 00:15:10.506 [2024-07-15 22:26:13.011233] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:10.506 [2024-07-15 22:26:17.362996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:17.363057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:17.363080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:17.363093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:17.363108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:17.363120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:17.363156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:17.363168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.506 [2024-07-15 22:26:17.363181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.506 [2024-07-15 22:26:17.363194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.363962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.363976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.507 [2024-07-15 22:26:17.364199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.364225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.364257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.507 [2024-07-15 22:26:17.364283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.507 [2024-07-15 22:26:17.364297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 
22:26:17.364388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.508 [2024-07-15 22:26:17.364534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.508 [2024-07-15 22:26:17.364546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.364847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.364874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.364901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.364932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.364958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.364999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.509 [2024-07-15 22:26:17.365284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.509 [2024-07-15 22:26:17.365298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.509 [2024-07-15 22:26:17.365311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 
[2024-07-15 22:26:17.365492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:10.510 [2024-07-15 22:26:17.365728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.365973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.365988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366040] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.510 [2024-07-15 22:26:17.366322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.510 [2024-07-15 22:26:17.366336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.511 [2024-07-15 22:26:17.366348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148a330 is same with the state(5) to be set 00:15:10.511 [2024-07-15 22:26:17.366377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85472 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85864 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85872 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85880 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85888 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85896 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85904 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85912 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:10.511 [2024-07-15 22:26:17.366747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:10.511 [2024-07-15 22:26:17.366756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85920 len:8 PRP1 0x0 PRP2 0x0 00:15:10.511 [2024-07-15 22:26:17.366768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366817] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148a330 was disconnected and freed. reset controller. 
00:15:10.511 [2024-07-15 22:26:17.366832] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:10.511 [2024-07-15 22:26:17.366878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.511 [2024-07-15 22:26:17.366892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.511 [2024-07-15 22:26:17.366918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.511 [2024-07-15 22:26:17.366944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.511 [2024-07-15 22:26:17.366969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.511 [2024-07-15 22:26:17.366981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:10.511 [2024-07-15 22:26:17.369709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:10.511 [2024-07-15 22:26:17.369743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b710 (9): Bad file descriptor 00:15:10.511 [2024-07-15 22:26:17.399893] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
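Each failover cycle above ends with bdev_nvme reporting "Resetting controller successful", and the Latency(us) summaries that follow are bdevperf's per-job results for the 15-second verify run. As a rough cross-check of those figures (a sketch added for illustration, not part of the captured output), the MiB/s column is simply IOPS multiplied by the 4096-byte IO size this workload uses:

  # Cross-check the throughput column; 11863.92 is the IOPS value from the table below.
  awk 'BEGIN { printf "%.2f MiB/s\n", 11863.92 * 4096 / (1024 * 1024) }'
  # prints 46.34, matching the MiB/s column reported by bdevperf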
00:15:10.511 00:15:10.511 Latency(us) 00:15:10.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.511 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:10.511 Verification LBA range: start 0x0 length 0x4000 00:15:10.511 NVMe0n1 : 15.01 11863.92 46.34 287.14 0.00 10511.56 447.43 22740.20 00:15:10.511 =================================================================================================================== 00:15:10.511 Total : 11863.92 46.34 287.14 0.00 10511.56 447.43 22740.20 00:15:10.511 Received shutdown signal, test time was about 15.000000 seconds 00:15:10.511 00:15:10.511 Latency(us) 00:15:10.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.511 =================================================================================================================== 00:15:10.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75859 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75859 /var/tmp/bdevperf.sock 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75859 ']' 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
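The bdevperf instance launched above with -z starts idle and only listens on its RPC socket; the waitforlisten helper (continuing below) waits until /var/tmp/bdevperf.sock is up, after which the test attaches controllers and drives I/O through bdevperf.py. Run by hand, the same pattern looks roughly like this sketch, using the binaries and flags shown in this log (the backgrounding and manual ordering are illustrative; in practice you wait for the socket as waitforlisten does):

  # Start bdevperf idle on a private RPC socket (same flags as the run above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # Once the socket is accepting connections, attach a bdev over NVMe/TCP and start the workload.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests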
00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.511 22:26:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.076 22:26:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.076 22:26:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:11.076 22:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:11.076 [2024-07-15 22:26:24.693749] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:11.333 22:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:11.333 [2024-07-15 22:26:24.881562] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:11.333 22:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.590 NVMe0n1 00:15:11.591 22:26:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.848 00:15:11.848 22:26:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:12.107 00:15:12.107 22:26:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:12.107 22:26:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:12.365 22:26:25 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:12.624 22:26:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:15.912 22:26:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:15.912 22:26:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:15.912 22:26:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75930 00:15:15.912 22:26:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.912 22:26:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75930 00:15:16.847 0 00:15:16.847 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.847 [2024-07-15 22:26:23.706234] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
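Steps @76 through @92 above are the heart of the failover exercise: two extra listeners are added to cnode1 (ports 4421 and 4422), the same NVMe0 controller is attached through all three portals so bdev_nvme holds alternate paths, the original 4420 path is detached to force a failover, and perform_tests then drives I/O while bdevperf's output is captured in try.txt (dumped below, where the "Start failover" notices show the path switch). Condensed into plain RPC calls, the sequence looks roughly as follows; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py as invoked above, listener changes go to the target's default RPC socket, and attach/detach go to bdevperf's socket:

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      # Attaching the same bdev name through another portal adds it as an alternate path.
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller exists
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1         # drop the 4420 path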
00:15:16.847 [2024-07-15 22:26:23.706308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75859 ] 00:15:16.847 [2024-07-15 22:26:23.851364] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.847 [2024-07-15 22:26:23.941022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.847 [2024-07-15 22:26:23.981544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.847 [2024-07-15 22:26:26.025035] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:16.847 [2024-07-15 22:26:26.025138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.847 [2024-07-15 22:26:26.025158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.847 [2024-07-15 22:26:26.025173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.847 [2024-07-15 22:26:26.025185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.847 [2024-07-15 22:26:26.025198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.847 [2024-07-15 22:26:26.025210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.847 [2024-07-15 22:26:26.025223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.847 [2024-07-15 22:26:26.025235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.847 [2024-07-15 22:26:26.025247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.847 [2024-07-15 22:26:26.025285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.847 [2024-07-15 22:26:26.025306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696710 (9): Bad file descriptor 00:15:16.847 [2024-07-15 22:26:26.032973] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:16.847 Running I/O for 1 seconds... 
00:15:16.847 00:15:16.847 Latency(us) 00:15:16.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.847 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:16.847 Verification LBA range: start 0x0 length 0x4000 00:15:16.847 NVMe0n1 : 1.01 11332.50 44.27 0.00 0.00 11234.58 1322.56 14317.91 00:15:16.847 =================================================================================================================== 00:15:16.847 Total : 11332.50 44.27 0.00 0.00 11234.58 1322.56 14317.91 00:15:16.847 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:16.847 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:17.105 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.362 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:17.362 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:17.362 22:26:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.619 22:26:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75859 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75859 ']' 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75859 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75859 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.927 killing process with pid 75859 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75859' 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75859 00:15:20.927 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75859 00:15:21.186 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:21.186 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:21.445 22:26:34 
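After the 1-second verify run summarized above, steps @95 through @101 strip the remaining trids from NVMe0 one at a time, re-checking bdev_nvme_get_controllers between detaches, and then sleep before the final check at @103 below. A condensed sketch of that check-and-detach pattern (rpc.py again abbreviating the scripts/rpc.py path used above):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # still attached?
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3   # let bdev_nvme settle before the final bdev_nvme_get_controllers check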
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.445 rmmod nvme_tcp 00:15:21.445 rmmod nvme_fabrics 00:15:21.445 rmmod nvme_keyring 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75610 ']' 00:15:21.445 22:26:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75610 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75610 ']' 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75610 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75610 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:21.446 killing process with pid 75610 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75610' 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75610 00:15:21.446 22:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75610 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.704 00:15:21.704 real 0m31.056s 00:15:21.704 user 1m57.928s 00:15:21.704 sys 0m6.194s 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.704 22:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:21.704 ************************************ 00:15:21.704 END TEST nvmf_failover 00:15:21.704 ************************************ 00:15:21.704 22:26:35 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:21.704 22:26:35 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:21.704 22:26:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.704 22:26:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.704 22:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.704 ************************************ 00:15:21.704 START TEST nvmf_host_discovery 00:15:21.705 ************************************ 00:15:21.705 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:21.964 * Looking for test storage... 00:15:21.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.964 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:21.965 Cannot find device "nvmf_tgt_br" 00:15:21.965 
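The "Cannot find device" messages above are expected: nvmf_veth_init first tears down any leftover interfaces from a previous run (the teardown continues just below; each step tolerates failure, hence the bare 'true' commands in the trace) and then rebuilds the topology the discovery test runs on: a network namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end on the host (10.0.0.1), a bridge joining the peer ends, and iptables rules opening the NVMe/TCP port. A condensed sketch of that setup using the same iproute2/iptables commands, with the second target interface (nvmf_tgt_if2, 10.0.0.3) omitted since it is configured identically:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host initiator can reach the namespaced target address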
22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.965 Cannot find device "nvmf_tgt_br2" 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:21.965 Cannot find device "nvmf_tgt_br" 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:21.965 Cannot find device "nvmf_tgt_br2" 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.965 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:22.224 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:22.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:15:22.225 00:15:22.225 --- 10.0.0.2 ping statistics --- 00:15:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.225 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:22.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:22.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:22.225 00:15:22.225 --- 10.0.0.3 ping statistics --- 00:15:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.225 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:22.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:22.225 00:15:22.225 --- 10.0.0.1 ping statistics --- 00:15:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.225 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76201 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76201 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76201 ']' 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.225 22:26:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.483 [2024-07-15 22:26:35.905652] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:22.483 [2024-07-15 22:26:35.905720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.483 [2024-07-15 22:26:36.048953] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.742 [2024-07-15 22:26:36.140110] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:22.742 [2024-07-15 22:26:36.140161] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.743 [2024-07-15 22:26:36.140170] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.743 [2024-07-15 22:26:36.140178] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.743 [2024-07-15 22:26:36.140184] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.743 [2024-07-15 22:26:36.140208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.743 [2024-07-15 22:26:36.180922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:23.309 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 [2024-07-15 22:26:36.783659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 [2024-07-15 22:26:36.795739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 null0 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 null1 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76233 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76233 /tmp/host.sock 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76233 ']' 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.310 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.310 22:26:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.310 [2024-07-15 22:26:36.878618] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:23.310 [2024-07-15 22:26:36.878688] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76233 ] 00:15:23.567 [2024-07-15 22:26:37.019275] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.567 [2024-07-15 22:26:37.113250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.567 [2024-07-15 22:26:37.154104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 22:26:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.133 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.391 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.392 22:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.392 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 [2024-07-15 22:26:38.061919] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:24.650 
22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:24.651 22:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:25.246 [2024-07-15 22:26:38.753094] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:25.246 [2024-07-15 22:26:38.753130] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:25.246 [2024-07-15 22:26:38.753143] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:25.246 [2024-07-15 22:26:38.759125] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:25.246 [2024-07-15 22:26:38.816201] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:25.246 [2024-07-15 22:26:38.816240] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:25.815 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.816 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.075 
22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 [2024-07-15 22:26:39.580816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:26.075 [2024-07-15 22:26:39.581844] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:26.075 [2024-07-15 22:26:39.581877] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:26.075 [2024-07-15 22:26:39.587811] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
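The retried checks traced here go through the waitforcondition helper from common/autotest_common.sh: discovery attach and log-page handling are asynchronous, so each assertion is polled rather than evaluated once. A minimal bash sketch of that polling pattern, inferred from the traced line references (@912-@918) and not quoted verbatim from the SPDK source:

waitforcondition() {
    local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10    # retry budget, per the traced "local max=10"
    while (( max-- )); do
        if eval "$cond"; then
            return 0    # condition met
        fi
        sleep 1         # matches the traced "sleep 1" between attempts
    done
    return 1            # condition never became true within the budget
}

Helpers such as get_bdev_list, get_subsystem_names and get_notification_count are defined in host/discovery.sh and wrap rpc_cmd -s /tmp/host.sock calls (bdev_get_bdevs, bdev_nvme_get_controllers, notify_get_notifications) piped through jq, sort and xargs, which is why every retry appears in the log as a fresh block of RPC trace lines.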
00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:26.075 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:26.076 [2024-07-15 22:26:39.646091] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:26.076 [2024-07-15 22:26:39.646110] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:26.076 [2024-07-15 22:26:39.646117] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.076 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.335 [2024-07-15 22:26:39.801690] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:26.335 [2024-07-15 22:26:39.801719] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:26.335 [2024-07-15 22:26:39.801921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.335 [2024-07-15 22:26:39.801948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.335 [2024-07-15 22:26:39.801959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.335 [2024-07-15 22:26:39.801968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.335 [2024-07-15 22:26:39.801977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.335 [2024-07-15 22:26:39.801986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.335 [2024-07-15 22:26:39.801995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.335 [2024-07-15 22:26:39.802003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.335 [2024-07-15 22:26:39.802012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c15fa0 is same with the state(5) to be set 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.335 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:26.336 [2024-07-15 22:26:39.807665] bdev_nvme.c:6775:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:26.336 [2024-07-15 22:26:39.807690] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:26.336 [2024-07-15 22:26:39.807749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c15fa0 (9): Bad file descriptor 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.336 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:26.595 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:26.595 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 22:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:26.596 22:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.596 22:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.973 [2024-07-15 22:26:41.193499] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:27.973 [2024-07-15 22:26:41.193727] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:27.973 [2024-07-15 22:26:41.193784] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:27.973 [2024-07-15 22:26:41.199513] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:27.973 [2024-07-15 22:26:41.259644] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:27.973 [2024-07-15 22:26:41.259895] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.973 request: 00:15:27.973 { 00:15:27.973 "name": "nvme", 00:15:27.973 "trtype": "tcp", 00:15:27.973 "traddr": "10.0.0.2", 00:15:27.973 "adrfam": "ipv4", 00:15:27.973 "trsvcid": 
"8009", 00:15:27.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:27.973 "wait_for_attach": true, 00:15:27.973 "method": "bdev_nvme_start_discovery", 00:15:27.973 "req_id": 1 00:15:27.973 } 00:15:27.973 Got JSON-RPC error response 00:15:27.973 response: 00:15:27.973 { 00:15:27.973 "code": -17, 00:15:27.973 "message": "File exists" 00:15:27.973 } 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:15:27.973 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.974 request: 00:15:27.974 { 00:15:27.974 "name": "nvme_second", 00:15:27.974 "trtype": "tcp", 00:15:27.974 "traddr": "10.0.0.2", 00:15:27.974 "adrfam": "ipv4", 00:15:27.974 "trsvcid": "8009", 00:15:27.974 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:27.974 "wait_for_attach": true, 00:15:27.974 "method": "bdev_nvme_start_discovery", 00:15:27.974 "req_id": 1 00:15:27.974 } 00:15:27.974 Got JSON-RPC error response 00:15:27.974 response: 00:15:27.974 { 00:15:27.974 "code": -17, 00:15:27.974 "message": "File exists" 00:15:27.974 } 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.974 22:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-07-15 22:26:42.498691] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:28.908 [2024-07-15 22:26:42.498918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c8c1a0 with addr=10.0.0.2, port=8010 00:15:28.908 [2024-07-15 22:26:42.498950] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:28.908 [2024-07-15 22:26:42.498961] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:28.908 [2024-07-15 22:26:42.498971] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:30.298 [2024-07-15 22:26:43.497058] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:30.298 [2024-07-15 22:26:43.497114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c8c1a0 with addr=10.0.0.2, port=8010 00:15:30.298 [2024-07-15 22:26:43.497135] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:30.298 [2024-07-15 22:26:43.497144] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:30.298 [2024-07-15 22:26:43.497153] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:30.865 [2024-07-15 22:26:44.495318] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:31.124 request: 00:15:31.124 { 00:15:31.124 "name": "nvme_second", 00:15:31.124 "trtype": "tcp", 00:15:31.124 "traddr": "10.0.0.2", 00:15:31.124 "adrfam": "ipv4", 00:15:31.124 "trsvcid": "8010", 00:15:31.124 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:31.124 "wait_for_attach": false, 00:15:31.124 "attach_timeout_ms": 3000, 00:15:31.124 "method": "bdev_nvme_start_discovery", 00:15:31.124 "req_id": 1 00:15:31.124 } 00:15:31.124 Got JSON-RPC error response 00:15:31.124 response: 00:15:31.124 { 00:15:31.124 "code": -110, 00:15:31.124 "message": "Connection timed out" 00:15:31.124 } 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76233 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.124 rmmod nvme_tcp 00:15:31.124 rmmod nvme_fabrics 00:15:31.124 rmmod nvme_keyring 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76201 ']' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76201 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76201 ']' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76201 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76201 00:15:31.124 killing process with pid 76201 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 76201' 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76201 00:15:31.124 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76201 00:15:31.382 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.382 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.382 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:31.383 00:15:31.383 real 0m9.626s 00:15:31.383 user 0m17.838s 00:15:31.383 sys 0m2.420s 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 ************************************ 00:15:31.383 END TEST nvmf_host_discovery 00:15:31.383 ************************************ 00:15:31.383 22:26:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.383 22:26:44 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:31.383 22:26:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.383 22:26:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.383 22:26:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 ************************************ 00:15:31.383 START TEST nvmf_host_multipath_status 00:15:31.383 ************************************ 00:15:31.383 22:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:31.642 * Looking for test storage... 
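The nvmf_host_discovery run that ends above exercises bdev_nvme_start_discovery through the test's rpc_cmd wrapper (a front end for scripts/rpc.py) in two failure modes: a second discovery request with -w (wait_for_attach) against 10.0.0.2:8009 is rejected with JSON-RPC error -17 "File exists" because a discovery service is already attached on that host socket, and a request against 10.0.0.2:8010, where nothing listens, gives up after the -T 3000 attach timeout with error -110 "Connection timed out". A minimal hand-run sketch of the same two calls (not part of the captured log; it assumes the test's host application is still listening on /tmp/host.sock and reuses the test's addresses and NQN):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Duplicate discovery on the port that already has a discovery service attached:
# expected to fail with JSON-RPC error -17 ("File exists").
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo 'duplicate discovery rejected as expected'

# Discovery against a port with no listener, bounded by a 3000 ms attach timeout:
# expected to fail with JSON-RPC error -110 ("Connection timed out").
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo 'discovery attach timed out as expected'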
00:15:31.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.642 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:31.643 Cannot find device "nvmf_tgt_br" 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:31.643 Cannot find device "nvmf_tgt_br2" 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.643 Cannot find device "nvmf_tgt_br" 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.643 Cannot find device "nvmf_tgt_br2" 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:31.643 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.903 22:26:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:31.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:31.903 00:15:31.903 --- 10.0.0.2 ping statistics --- 00:15:31.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.903 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:31.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:31.903 00:15:31.903 --- 10.0.0.3 ping statistics --- 00:15:31.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.903 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:31.903 00:15:31.903 --- 10.0.0.1 ping statistics --- 00:15:31.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.903 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76676 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76676 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76676 ']' 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.903 22:26:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:32.162 [2024-07-15 22:26:45.555772] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
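Before the target application starts below, nvmf_veth_init (nvmf/common.sh) has just built the whole test network out of veth pairs, a bridge, and one namespace: the target ends of the veth pairs are moved into nvmf_tgt_ns_spdk and addressed 10.0.0.2/24 and 10.0.0.3/24, the initiator end stays in the root namespace as 10.0.0.1/24, and the peer ends are enslaved to nvmf_br, which is why the three pings above succeed. A condensed, hand-run equivalent for one target interface (a sketch only, not part of the captured log; it uses the same interface and namespace names as the script, and the second interface, nvmf_tgt_if2 with 10.0.0.3/24, is set up the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener
ping -c 1 10.0.0.2                                                  # root ns -> target ns, as in the log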
00:15:32.162 [2024-07-15 22:26:45.556238] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.162 [2024-07-15 22:26:45.701767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:32.162 [2024-07-15 22:26:45.793180] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.162 [2024-07-15 22:26:45.793229] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.162 [2024-07-15 22:26:45.793238] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.162 [2024-07-15 22:26:45.793247] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.162 [2024-07-15 22:26:45.793253] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.162 [2024-07-15 22:26:45.793708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.162 [2024-07-15 22:26:45.793708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.422 [2024-07-15 22:26:45.835454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76676 00:15:32.988 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.246 [2024-07-15 22:26:46.632909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.246 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:33.246 Malloc0 00:15:33.246 22:26:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:33.505 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.764 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.022 [2024-07-15 22:26:47.407758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:34.022 [2024-07-15 22:26:47.575560] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76725 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76725 /var/tmp/bdevperf.sock 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76725 ']' 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.022 22:26:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:34.953 22:26:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.954 22:26:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:34.954 22:26:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:35.211 22:26:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:35.469 Nvme0n1 00:15:35.469 22:26:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:35.732 Nvme0n1 00:15:35.732 22:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:35.732 22:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:37.629 22:26:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:37.629 22:26:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:37.887 22:26:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:38.143 22:26:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:39.082 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:39.082 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:39.082 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.082 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:39.339 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.339 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:39.339 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.339 22:26:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.596 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.853 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.853 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:39.853 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:39.853 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.109 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.109 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:40.109 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.109 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:40.367 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.367 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:40.367 22:26:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.625 22:26:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:40.625 22:26:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.998 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:42.257 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.257 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.257 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.257 22:26:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:42.515 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.515 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:42.515 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.515 22:26:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:42.773 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:43.032 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:43.290 22:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:44.223 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:44.223 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:44.224 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.224 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.482 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.482 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.482 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.482 22:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.741 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:15:44.741 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:44.741 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:44.741 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:44.999 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.256 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.256 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:45.256 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.256 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:45.514 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.514 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:45.514 22:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:45.771 22:26:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:45.771 22:26:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.146 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:47.405 22:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.663 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.663 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:47.663 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.663 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:15:47.922 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:48.180 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:48.438 22:27:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:49.373 22:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:49.373 22:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:49.373 22:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.373 22:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:49.631 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:49.631 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:49.631 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.631 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:49.889 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:49.889 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:49.889 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.889 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.148 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:50.406 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.406 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:50.406 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.406 22:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:50.664 22:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.664 22:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:50.664 22:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:50.922 22:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:51.179 22:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:52.114 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:52.114 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:52.114 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.114 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:52.371 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:52.371 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:52.371 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.371 22:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.629 22:27:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.629 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:52.887 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.887 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:52.887 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.887 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:53.144 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:53.144 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:53.144 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.145 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:53.402 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.402 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:53.402 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:53.402 22:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:53.660 22:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:53.918 22:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:54.892 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:54.892 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:54.892 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.892 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.173 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:55.432 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.432 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:55.432 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:55.432 22:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.689 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.689 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:55.689 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.690 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:55.948 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:56.207 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:56.466 22:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:57.401 22:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:57.401 22:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:57.401 22:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.401 22:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.659 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.659 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:57.659 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.659 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:57.917 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.176 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.176 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.176 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.176 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.434 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.434 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:58.434 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.434 22:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.693 22:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.693 22:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:58.693 22:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:58.693 22:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:58.951 22:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:59.884 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:59.884 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:59.884 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.884 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.143 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.143 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:00.143 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:00.143 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.402 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.402 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:00.402 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:00.402 22:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.660 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.660 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:00.660 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.660 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.920 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:01.179 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.179 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:01.179 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:01.473 22:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:01.753 22:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:02.689 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:02.689 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:02.689 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:02.689 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.948 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.948 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:02.948 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:02.948 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.207 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.207 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:03.207 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.207 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:03.466 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.466 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:03.466 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.466 22:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:03.723 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.723 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:03.724 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76725 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76725 ']' 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76725 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76725 00:16:03.982 killing process with pid 76725 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76725' 00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76725 
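The trace above cycles through the same two RPC patterns the whole way: nvmf_subsystem_listener_set_ana_state flips the ANA state of each target-side listener, and bdev_nvme_get_io_paths, issued against the initiator's bdevperf RPC socket and filtered with jq, reads back the current/connected/accessible flags for a given trsvcid. A condensed sketch of that flow, using only commands, paths and identifiers visible in this run (not the verbatim helpers from multipath_status.sh):

    # Target side: set the ANA state of the two listeners on cnode1 (as set_ANA_state does).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # Initiator side: read back one flag ("current", "connected" or "accessible") for one port,
    # as port_status does: query bdevperf over its RPC socket and select the io_path by trsvcid.
    port_status() {
        local port=$1 flag=$2
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag"
    }
    sleep 1                                            # give the ANA change time to propagate, as the script does
    [[ $(port_status 4420 accessible) == false ]]      # 4420 was just made inaccessible
    [[ $(port_status 4421 current) == true ]]          # the optimized path carries I/O under the default policy

    # Once both listeners are optimized, the policy is widened so both paths become current:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

Each check_status call traced above is simply six such port_status assertions in a row: current, connected and accessible for port 4420 and then for port 4421.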
00:16:03.982 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76725 00:16:04.249 Connection closed with partial response: 00:16:04.249 00:16:04.249 00:16:04.249 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76725 00:16:04.249 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:04.249 [2024-07-15 22:26:47.624947] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:04.249 [2024-07-15 22:26:47.625025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ] 00:16:04.249 [2024-07-15 22:26:47.767965] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.249 [2024-07-15 22:26:47.861961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.249 [2024-07-15 22:26:47.903002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:04.249 Running I/O for 90 seconds... 00:16:04.249 [2024-07-15 22:27:01.736239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81416 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.249 [2024-07-15 22:27:01.736510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:04.249 [2024-07-15 22:27:01.736527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.736810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.736981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.736994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
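The remainder of this try.txt dump is dominated by completion notices of the form ASYMMETRIC ACCESS INACCESSIBLE (03/02): path-related status (SCT 0x3) with the ANA-inaccessible status code (0x2), returned for I/O issued while the corresponding listener sits in the inaccessible ANA state. When skimming a log like this one, a rough per-status tally is usually enough; a minimal sketch against the file printed above (illustrative triage aid, not part of the test):

    # Count completions per NVMe status string in the bdevperf log.
    grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z][A-Z ]*([0-9a-f]*/[0-9a-f]*)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c | sort -rn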
00:16:04.250 [2024-07-15 22:27:01.737141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.250 [2024-07-15 22:27:01.737304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:04.250 [2024-07-15 22:27:01.737555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.250 [2024-07-15 22:27:01.737568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.737982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.737994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
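For context while reading the rest of this dump: the shutdown traced just before it (the -z test, kill -0, uname, ps --no-headers -o comm=, the echo, then kill and wait against pid 76725) is the usual killprocess flow from autotest_common.sh. A minimal sketch of that pattern, assuming the pid is a child bdevperf process rather than a sudo wrapper; it is illustrative, not the verbatim helper:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                       # nothing to kill without a pid
        kill -0 "$pid" 2>/dev/null || return 0          # process already gone
        if [ "$(uname)" = Linux ] && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                 # valid here because bdevperf was started by this shell
        fi
    }
    killprocess_sketch 76725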
00:16:04.251 [2024-07-15 22:27:01.738086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.251 [2024-07-15 22:27:01.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.251 [2024-07-15 22:27:01.738583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:04.251 [2024-07-15 22:27:01.738612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.738983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.738996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:16:04.252 [2024-07-15 22:27:01.739014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.252 [2024-07-15 22:27:01.739271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.252 [2024-07-15 22:27:01.739626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:04.252 [2024-07-15 22:27:01.739644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.739846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.739876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.739908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:04.253 [2024-07-15 22:27:01.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.739969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.739987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.739999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.740017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.740029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.740047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.740060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.741127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.253 [2024-07-15 22:27:01.741859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:04.253 [2024-07-15 22:27:01.741878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.253 [2024-07-15 22:27:01.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.741909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.741921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.741939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.741952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.741970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.741983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:16:04.254 [2024-07-15 22:27:01.742121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.254 [2024-07-15 22:27:01.742625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.254 [2024-07-15 22:27:01.742778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:04.254 [2024-07-15 22:27:01.742798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.742810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.742840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.742871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.742936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.742966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.742984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.742997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.255 [2024-07-15 22:27:01.743057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.255 [2024-07-15 22:27:01.743490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.743520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.743537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.757125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.757214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.757235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.757260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.757277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:04.255 [2024-07-15 22:27:01.757301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.255 [2024-07-15 22:27:01.757318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.757377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.757439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.757480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:16:04.256 [2024-07-15 22:27:01.757813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.757959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.757983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.256 [2024-07-15 22:27:01.758573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:04.256 [2024-07-15 22:27:01.758696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.256 [2024-07-15 22:27:01.758712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.758958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.758985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.257 [2024-07-15 22:27:01.759042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.257 [2024-07-15 22:27:01.759667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.759979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.759996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.760020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.760036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.760060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.257 [2024-07-15 22:27:01.760077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:04.257 [2024-07-15 22:27:01.760101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.760118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.760141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.760158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.760182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.760199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.760222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.760239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:16:04.258 [2024-07-15 22:27:01.761872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.761904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.761936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.761953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.761977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.761994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.762347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.762963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.762986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.763041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.763095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.763150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.763206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.258 [2024-07-15 22:27:01.763261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.763316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.763370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.258 [2024-07-15 22:27:01.763425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:04.258 [2024-07-15 22:27:01.763480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:04.258 [2024-07-15 22:27:01.763521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.763544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.763616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.763672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.763727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.763782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.763837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.763892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.763947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.763979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.764954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.764987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
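The command side of each pair carries the fields needed to identify the failed I/O: the opcode (WRITE or READ), sqid, cid, nsid, the starting lba and a transfer length of 8 logical blocks, followed by the SGL descriptor the request used. When sifting a capture like this it can help to pull those fields back out programmatically; the sketch below parses one such line with sscanf. The format string is written against the lines shown here, and the helper name is made up for the example.

```c
#include <stdio.h>

/* Hypothetical parser for the qpair trace lines above, e.g.
 *   "WRITE sqid:1 cid:27 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000"
 * Only the fixed-position fields up to "len:" are recovered; the SGL tail is ignored. */
struct io_trace {
    char opcode[8];
    unsigned sqid, cid, nsid;
    unsigned long long lba;
    unsigned len;
};

static int parse_io_trace(const char *line, struct io_trace *t)
{
    int n = sscanf(line, "%7s sqid:%u cid:%u nsid:%u lba:%llu len:%u",
                   t->opcode, &t->sqid, &t->cid, &t->nsid, &t->lba, &t->len);
    return n == 6 ? 0 : -1;
}

int main(void)
{
    struct io_trace t;
    const char *line = "WRITE sqid:1 cid:27 nsid:1 lba:82040 len:8 "
                       "SGL DATA BLOCK OFFSET 0x0 len:0x1000";
    if (parse_io_trace(line, &t) == 0)
        printf("%s cid=%u lba=%llu len=%u blocks\n", t.opcode, t.cid, t.lba, t.len);
    return 0;
}
```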
00:16:04.259 [2024-07-15 22:27:01.765206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.259 [2024-07-15 22:27:01.765338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:04.259 [2024-07-15 22:27:01.765386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.259 [2024-07-15 22:27:01.765408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.765958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.765980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.260 [2024-07-15 22:27:01.766831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.260 [2024-07-15 22:27:01.766886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:04.260 [2024-07-15 22:27:01.766918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.260 [2024-07-15 22:27:01.766941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.766973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.766996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.767854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.767909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.767964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.767996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.261 [2024-07-15 22:27:01.768293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:16:04.261 [2024-07-15 22:27:01.768553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.261 [2024-07-15 22:27:01.768643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:04.261 [2024-07-15 22:27:01.768675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.768730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.768785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.768894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.768949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.768971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.769004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.769026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.770902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.770944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.262 [2024-07-15 22:27:01.771545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.771946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.771968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.262 [2024-07-15 22:27:01.772133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.262 [2024-07-15 22:27:01.772418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:04.262 [2024-07-15 22:27:01.772439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.772753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.772966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.772981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
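Each completion also echoes cdw0 (command-specific result, 0 here), sqhd (the submission queue head pointer, which keeps advancing as entries are consumed) and the p/m/dnr flags. dnr stays 0 throughout, so the controller is not forbidding a retry, and because the status type is path related an ANA-aware initiator would normally resubmit the I/O on another path rather than surface the error. The snippet below sketches that decision as a plain classifier; it is illustrative only and not a copy of SPDK's bdev_nvme retry policy.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum retry_action { RETRY_OTHER_PATH, RETRY_SAME_PATH, FAIL_IO };

/* Decide what to do with a failed I/O from its sct/sc/dnr fields.
 * For the completions above (sct=0x3, sc=0x02, dnr=0) this resolves to
 * "retry on another path", the usual ANA failover behaviour. */
static enum retry_action classify_failure(uint8_t sct, uint8_t sc, bool dnr)
{
    (void)sc;                      /* sc only matters for finer-grained policies */
    if (dnr)
        return FAIL_IO;            /* controller asked the host not to retry */
    if (sct == 0x3)
        return RETRY_OTHER_PATH;   /* path related, e.g. ANA inaccessible */
    return RETRY_SAME_PATH;
}

int main(void)
{
    printf("action=%d\n", classify_failure(0x3, 0x02, false));
    return 0;
}
```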
00:16:04.263 [2024-07-15 22:27:01.773335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.263 [2024-07-15 22:27:01.773508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.773544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.773581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.773626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.263 [2024-07-15 22:27:01.773662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:04.263 [2024-07-15 22:27:01.773684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.773699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.773735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.773771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.773813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.773850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.773922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.773959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.773980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.773995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.774966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.774993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.264 [2024-07-15 22:27:01.775051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.264 [2024-07-15 22:27:01.775436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:04.264 [2024-07-15 22:27:01.775612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.264 [2024-07-15 22:27:01.775628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.775972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.775999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:16:04.265 [2024-07-15 22:27:01.776334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.265 [2024-07-15 22:27:01.776562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.265 [2024-07-15 22:27:01.776708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:04.265 [2024-07-15 22:27:01.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.776963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.776990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.777004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.777031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.777045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.777072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.777087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:01.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:01.777128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.147909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.147976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:04.266 [2024-07-15 22:27:15.148361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.266 [2024-07-15 22:27:15.148621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.266 [2024-07-15 22:27:15.148779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:04.266 [2024-07-15 22:27:15.148798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.148810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.148842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.148873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.148905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.148936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.148967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.148986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.148999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:16:04.267 [2024-07-15 22:27:15.149341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.267 [2024-07-15 22:27:15.149768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:04.267 [2024-07-15 22:27:15.149787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.267 [2024-07-15 22:27:15.149800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.149837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.149869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.149906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.268 [2024-07-15 22:27:15.149937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.268 [2024-07-15 22:27:15.149969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.149987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.268 [2024-07-15 22:27:15.150000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.150019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.268 [2024-07-15 22:27:15.150032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.268 [2024-07-15 22:27:15.151437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:04.268 [2024-07-15 22:27:15.151524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72488 len:8 SGL 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.268 DATA BLOCK OFFSET 0x0 len:0x1000 00:16:04.268 [2024-07-15 22:27:15.151549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0
00:16:04.268 [2024-07-15 22:27:15.151569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:04.268 [2024-07-15 22:27:15.151582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:16:04.268 Received shutdown signal, test time was about 28.285102 seconds
00:16:04.268
00:16:04.268                                                                  Latency(us)
00:16:04.268 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:04.268 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:04.268 	 Verification LBA range: start 0x0 length 0x4000
00:16:04.268 	 Nvme0n1                                                                  :      28.28   11216.93      43.82       0.00       0.00   11389.31     330.64 3058978.34
00:16:04.268 ===================================================================================================================
00:16:04.268 Total                                                                     :   11216.93      43.82       0.00       0.00   11389.31     330.64 3058978.34
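As a quick cross-check of the summary rows above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size, so it can be recomputed from the printed numbers (this aside is not part of the test output):
    awk 'BEGIN { printf "%.2f\n", 11216.93 * 4096 / (1024 * 1024) }'    # prints 43.82, matching the MiB/s reported for Nvme0n1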
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:04.527 22:27:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76676 ']'
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76676
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76676 ']'
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76676
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:16:04.527 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76676
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
killing process with pid 76676
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76676'
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76676
00:16:04.528 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76676
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:16:04.787
00:16:04.787 real 0m33.400s
00:16:04.787 user 1m43.250s
00:16:04.787 sys 0m12.135s
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:04.787 22:27:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:16:04.787 ************************************
00:16:04.787 END TEST nvmf_host_multipath_status
00:16:04.787 ************************************
00:16:05.047 22:27:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:05.047 22:27:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:16:05.047 22:27:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:05.047 22:27:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:05.047 22:27:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:05.047 ************************************
00:16:05.047 START TEST nvmf_discovery_remove_ifc
00:16:05.047 ************************************
00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:16:05.047 * Looking for test storage...
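Before the discovery_remove_ifc output continues below, the teardown traced above is worth reading as a recipe: it is the standard nvmftestfini sequence from test/nvmf/common.sh that these nvmf host tests share. A condensed sketch of the steps it performed here, with the PID and interface name taken from this particular run (they will differ in any other run, and the real helper handles more cases than shown):
    trap - SIGINT SIGTERM EXIT
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    sync
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics and nvme_keyring, hence the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 76676                        # 76676 is this run's nvmf target process (reactor_0)
    wait 76676                        # valid in the harness because the target was started by the same shell
    ip -4 addr flush nvmf_init_if     # remove the test addresses from the virtual initiator-side interface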
00:16:05.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:05.047 Cannot find device "nvmf_tgt_br" 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:05.047 Cannot find device "nvmf_tgt_br2" 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:05.047 Cannot find device "nvmf_tgt_br" 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:05.047 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:05.307 Cannot find device "nvmf_tgt_br2" 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.307 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.565 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.565 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.565 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.565 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:05.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:16:05.565 00:16:05.565 --- 10.0.0.2 ping statistics --- 00:16:05.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.565 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:05.565 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:05.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:16:05.565 00:16:05.566 --- 10.0.0.3 ping statistics --- 00:16:05.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.566 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:16:05.566 00:16:05.566 --- 10.0.0.1 ping statistics --- 00:16:05.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.566 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.566 22:27:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77466 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77466 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77466 ']' 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.566 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:05.566 [2024-07-15 22:27:19.086244] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:16:05.566 [2024-07-15 22:27:19.086343] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.826 [2024-07-15 22:27:19.236655] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.826 [2024-07-15 22:27:19.330342] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.826 [2024-07-15 22:27:19.330393] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.826 [2024-07-15 22:27:19.330402] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.826 [2024-07-15 22:27:19.330410] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.826 [2024-07-15 22:27:19.330416] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.826 [2024-07-15 22:27:19.330446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.826 [2024-07-15 22:27:19.371442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.425 22:27:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:06.425 [2024-07-15 22:27:19.994060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.425 [2024-07-15 22:27:20.002154] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:06.425 null0 00:16:06.425 [2024-07-15 22:27:20.034064] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77498 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77498 /tmp/host.sock 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77498 ']' 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:06.425 22:27:20 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.425 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:06.425 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.684 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:06.684 [2024-07-15 22:27:20.107897] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:06.684 [2024-07-15 22:27:20.107974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77498 ] 00:16:06.684 [2024-07-15 22:27:20.237160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.941 [2024-07-15 22:27:20.331320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.508 22:27:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:07.508 [2024-07-15 22:27:21.031409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:07.508 22:27:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.508 22:27:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:07.508 22:27:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.508 22:27:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:08.445 [2024-07-15 22:27:22.073712] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:08.445 [2024-07-15 22:27:22.073775] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:08.445 [2024-07-15 22:27:22.073798] 
bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.703 [2024-07-15 22:27:22.079774] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:08.703 [2024-07-15 22:27:22.138190] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:08.703 [2024-07-15 22:27:22.138341] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:08.703 [2024-07-15 22:27:22.138372] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:08.703 [2024-07-15 22:27:22.138400] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:08.703 [2024-07-15 22:27:22.138435] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:08.703 [2024-07-15 22:27:22.142570] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13e20b0 was disconnected and freed. delete nvme_qpair. 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:08.703 22:27:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:09.660 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:09.660 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.661 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.661 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:09.661 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:09.661 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:09.661 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:09.919 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.919 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:09.919 22:27:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:10.853 22:27:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:11.787 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:12.045 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.045 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:12.045 22:27:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:12.983 22:27:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:13.917 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:13.917 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:13.917 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:13.918 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.918 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.918 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:13.918 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:13.918 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.175 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:14.176 22:27:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:14.176 [2024-07-15 22:27:27.555942] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:14.176 [2024-07-15 22:27:27.556001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.176 [2024-07-15 22:27:27.556014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.176 [2024-07-15 22:27:27.556026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.176 [2024-07-15 22:27:27.556035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.176 [2024-07-15 22:27:27.556044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.176 [2024-07-15 22:27:27.556052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.176 [2024-07-15 22:27:27.556061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.176 [2024-07-15 22:27:27.556070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.176 [2024-07-15 22:27:27.556079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.176 [2024-07-15 22:27:27.556087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.176 [2024-07-15 22:27:27.556096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347c60 is same with the state(5) to be set 00:16:14.176 [2024-07-15 22:27:27.565921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347c60 (9): Bad file descriptor 00:16:14.176 [2024-07-15 22:27:27.575925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:15.162 [2024-07-15 22:27:28.605634] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:15.162 [2024-07-15 22:27:28.605721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1347c60 with addr=10.0.0.2, port=4420 00:16:15.162 [2024-07-15 22:27:28.605741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347c60 is same with the state(5) to be set 00:16:15.162 [2024-07-15 22:27:28.605787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347c60 (9): Bad file descriptor 00:16:15.162 [2024-07-15 22:27:28.606167] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:15.162 [2024-07-15 22:27:28.606207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:15.162 [2024-07-15 22:27:28.606220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:15.162 [2024-07-15 22:27:28.606234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:15.162 [2024-07-15 22:27:28.606257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:15.162 [2024-07-15 22:27:28.606269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:15.162 22:27:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:16.099 [2024-07-15 22:27:29.604697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:16.099 [2024-07-15 22:27:29.604758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:16.099 [2024-07-15 22:27:29.604769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:16.099 [2024-07-15 22:27:29.604779] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:16.099 [2024-07-15 22:27:29.604801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:16.099 [2024-07-15 22:27:29.604827] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:16.099 [2024-07-15 22:27:29.604878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.099 [2024-07-15 22:27:29.604892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.099 [2024-07-15 22:27:29.604904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.099 [2024-07-15 22:27:29.604913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.099 [2024-07-15 22:27:29.604923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.099 [2024-07-15 22:27:29.604932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.099 [2024-07-15 22:27:29.604941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.099 [2024-07-15 22:27:29.604950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.099 [2024-07-15 22:27:29.604959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.099 [2024-07-15 22:27:29.604967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.099 [2024-07-15 22:27:29.604976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:16.099 [2024-07-15 22:27:29.605623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134ba00 (9): Bad file descriptor 00:16:16.099 [2024-07-15 22:27:29.606632] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:16.099 [2024-07-15 22:27:29.606654] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.099 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.356 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.356 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:16.356 22:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:17.293 22:27:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:18.231 [2024-07-15 22:27:31.611821] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:18.231 [2024-07-15 22:27:31.612031] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:18.231 [2024-07-15 22:27:31.612066] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:18.231 [2024-07-15 22:27:31.617842] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:18.231 [2024-07-15 22:27:31.673814] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:18.231 [2024-07-15 22:27:31.673866] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:18.231 [2024-07-15 22:27:31.673885] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:18.231 [2024-07-15 22:27:31.673901] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:18.231 [2024-07-15 22:27:31.673910] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:18.231 [2024-07-15 22:27:31.680567] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13c9b80 was disconnected and freed. delete nvme_qpair. 
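(Context note for the repeated get_bdev_list / sleep 1 pairs in the trace above and below: this is the test's polling loop, which waits for nvme0n1 to disappear after the target interface is taken down and for nvme1n1 to appear once it is brought back up. The following is a minimal sketch of that pattern reconstructed from the xtrace frames ("host/discovery_remove_ifc.sh@29/@33/@34") rather than copied from discovery_remove_ifc.sh; rpc_cmd is the harness helper visible in the trace, and /tmp/host.sock is the host application's RPC socket shown above.)

    # Sketch of the bdev polling used by the test (assumes the SPDK autotest
    # rpc_cmd helper and a host app listening on /tmp/host.sock).
    get_bdev_list() {
        # Ask the host application for its current bdev names, normalized to one line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value:
        # '' while waiting for removal, nvme1n1 while waiting for re-attach.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }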
00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:18.231 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77498 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77498 ']' 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77498 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77498 00:16:18.490 killing process with pid 77498 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77498' 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77498 00:16:18.490 22:27:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77498 00:16:18.490 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:18.490 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.490 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.748 rmmod nvme_tcp 00:16:18.748 rmmod nvme_fabrics 00:16:18.748 rmmod nvme_keyring 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:18.748 22:27:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77466 ']' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77466 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77466 ']' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77466 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77466 00:16:18.748 killing process with pid 77466 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77466' 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77466 00:16:18.748 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77466 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.006 00:16:19.006 real 0m14.073s 00:16:19.006 user 0m23.508s 00:16:19.006 sys 0m3.163s 00:16:19.006 ************************************ 00:16:19.006 END TEST nvmf_discovery_remove_ifc 00:16:19.006 ************************************ 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.006 22:27:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.006 22:27:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.006 22:27:32 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:19.006 22:27:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.006 22:27:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.006 22:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.006 ************************************ 00:16:19.006 START TEST nvmf_identify_kernel_target 00:16:19.006 ************************************ 00:16:19.006 22:27:32 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:19.265 * Looking for test storage... 00:16:19.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.265 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.265 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:19.265 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.266 Cannot find device "nvmf_tgt_br" 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.266 Cannot find device "nvmf_tgt_br2" 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.266 Cannot find device "nvmf_tgt_br" 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.266 Cannot find device "nvmf_tgt_br2" 00:16:19.266 22:27:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:19.266 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.525 22:27:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:19.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:19.525 00:16:19.525 --- 10.0.0.2 ping statistics --- 00:16:19.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.525 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:19.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:19.525 00:16:19.525 --- 10.0.0.3 ping statistics --- 00:16:19.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.525 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:16:19.525 00:16:19.525 --- 10.0.0.1 ping statistics --- 00:16:19.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.525 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.525 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:19.784 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:20.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.349 Waiting for block devices as requested 00:16:20.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:20.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:20.607 22:27:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:20.607 No valid GPT data, bailing 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:20.607 No valid GPT data, bailing 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:20.607 No valid GPT data, bailing 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:20.607 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:20.608 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:20.608 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:20.608 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:20.608 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:20.608 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:20.865 No valid GPT data, bailing 00:16:20.865 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:20.865 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:20.865 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
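
The mkdir above, together with the entries that follow, is configure_kernel_target at work: after the blkid/spdk-gpt.py probes picked an unused namespace (/dev/nvme1n1), the test publishes it through the kernel nvmet configfs tree and exposes it on a TCP port at 10.0.0.1:4420. A minimal standalone sketch of that sequence is shown below; the NQN, device and address values come from this trace, while the attr_*/addr_* file names are assumed from the standard nvmet configfs layout (the wrapped trace only shows the bare echo values), so treat it as illustrative rather than a verbatim reconstruction of nvmf/common.sh.

  # Publish a local block device as a kernel NVMe-oF/TCP target via configfs (run as root).
  modprobe nvmet
  modprobe nvmet_tcp    # usually auto-loaded when the TCP port is configured, loaded here for clarity

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify output (newer kernels only)
  echo 1            > "$subsys/attr_allow_any_host"              # accept any host NQN
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"

  ln -s "$subsys" "$port/subsystems/"   # make the subsystem reachable through the port

  # Teardown mirrors this (see the clean_kernel_target entries later in the log):
  # echo 0 > "$subsys/namespaces/1/enable"; rm -f "$port/subsystems/"*
  # rmdir "$subsys/namespaces/1" "$port" "$subsys"; modprobe -r nvmet_tcp nvmet

Once the symlink is in place, the discovery and NVM subsystems become visible on 10.0.0.1:4420, which is exactly what the nvme discover and spdk_nvme_identify entries that follow verify.
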
00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -a 10.0.0.1 -t tcp -s 4420 00:16:20.866 00:16:20.866 Discovery Log Number of Records 2, Generation counter 2 00:16:20.866 =====Discovery Log Entry 0====== 00:16:20.866 trtype: tcp 00:16:20.866 adrfam: ipv4 00:16:20.866 subtype: current discovery subsystem 00:16:20.866 treq: not specified, sq flow control disable supported 00:16:20.866 portid: 1 00:16:20.866 trsvcid: 4420 00:16:20.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:20.866 traddr: 10.0.0.1 00:16:20.866 eflags: none 00:16:20.866 sectype: none 00:16:20.866 =====Discovery Log Entry 1====== 00:16:20.866 trtype: tcp 00:16:20.866 adrfam: ipv4 00:16:20.866 subtype: nvme subsystem 00:16:20.866 treq: not specified, sq flow control disable supported 00:16:20.866 portid: 1 00:16:20.866 trsvcid: 4420 00:16:20.866 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:20.866 traddr: 10.0.0.1 00:16:20.866 eflags: none 00:16:20.866 sectype: none 00:16:20.866 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:20.866 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:21.124 ===================================================== 00:16:21.124 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:21.124 ===================================================== 00:16:21.124 Controller Capabilities/Features 00:16:21.124 ================================ 00:16:21.124 Vendor ID: 0000 00:16:21.124 Subsystem Vendor ID: 0000 00:16:21.124 Serial Number: 648536832d90fef9598d 00:16:21.124 Model Number: Linux 00:16:21.124 Firmware Version: 6.7.0-68 00:16:21.124 Recommended Arb Burst: 0 00:16:21.124 IEEE OUI Identifier: 00 00 00 00:16:21.124 Multi-path I/O 00:16:21.124 May have multiple subsystem ports: No 00:16:21.124 May have multiple controllers: No 00:16:21.124 Associated with SR-IOV VF: No 00:16:21.124 Max Data Transfer Size: Unlimited 00:16:21.124 Max Number of Namespaces: 0 
00:16:21.124 Max Number of I/O Queues: 1024 00:16:21.124 NVMe Specification Version (VS): 1.3 00:16:21.124 NVMe Specification Version (Identify): 1.3 00:16:21.124 Maximum Queue Entries: 1024 00:16:21.124 Contiguous Queues Required: No 00:16:21.124 Arbitration Mechanisms Supported 00:16:21.124 Weighted Round Robin: Not Supported 00:16:21.124 Vendor Specific: Not Supported 00:16:21.124 Reset Timeout: 7500 ms 00:16:21.124 Doorbell Stride: 4 bytes 00:16:21.124 NVM Subsystem Reset: Not Supported 00:16:21.124 Command Sets Supported 00:16:21.124 NVM Command Set: Supported 00:16:21.124 Boot Partition: Not Supported 00:16:21.124 Memory Page Size Minimum: 4096 bytes 00:16:21.124 Memory Page Size Maximum: 4096 bytes 00:16:21.124 Persistent Memory Region: Not Supported 00:16:21.124 Optional Asynchronous Events Supported 00:16:21.124 Namespace Attribute Notices: Not Supported 00:16:21.124 Firmware Activation Notices: Not Supported 00:16:21.124 ANA Change Notices: Not Supported 00:16:21.124 PLE Aggregate Log Change Notices: Not Supported 00:16:21.124 LBA Status Info Alert Notices: Not Supported 00:16:21.124 EGE Aggregate Log Change Notices: Not Supported 00:16:21.124 Normal NVM Subsystem Shutdown event: Not Supported 00:16:21.124 Zone Descriptor Change Notices: Not Supported 00:16:21.124 Discovery Log Change Notices: Supported 00:16:21.124 Controller Attributes 00:16:21.124 128-bit Host Identifier: Not Supported 00:16:21.124 Non-Operational Permissive Mode: Not Supported 00:16:21.124 NVM Sets: Not Supported 00:16:21.124 Read Recovery Levels: Not Supported 00:16:21.124 Endurance Groups: Not Supported 00:16:21.124 Predictable Latency Mode: Not Supported 00:16:21.124 Traffic Based Keep ALive: Not Supported 00:16:21.124 Namespace Granularity: Not Supported 00:16:21.124 SQ Associations: Not Supported 00:16:21.124 UUID List: Not Supported 00:16:21.124 Multi-Domain Subsystem: Not Supported 00:16:21.124 Fixed Capacity Management: Not Supported 00:16:21.124 Variable Capacity Management: Not Supported 00:16:21.124 Delete Endurance Group: Not Supported 00:16:21.124 Delete NVM Set: Not Supported 00:16:21.124 Extended LBA Formats Supported: Not Supported 00:16:21.124 Flexible Data Placement Supported: Not Supported 00:16:21.124 00:16:21.124 Controller Memory Buffer Support 00:16:21.124 ================================ 00:16:21.124 Supported: No 00:16:21.124 00:16:21.124 Persistent Memory Region Support 00:16:21.124 ================================ 00:16:21.124 Supported: No 00:16:21.124 00:16:21.124 Admin Command Set Attributes 00:16:21.124 ============================ 00:16:21.124 Security Send/Receive: Not Supported 00:16:21.124 Format NVM: Not Supported 00:16:21.124 Firmware Activate/Download: Not Supported 00:16:21.125 Namespace Management: Not Supported 00:16:21.125 Device Self-Test: Not Supported 00:16:21.125 Directives: Not Supported 00:16:21.125 NVMe-MI: Not Supported 00:16:21.125 Virtualization Management: Not Supported 00:16:21.125 Doorbell Buffer Config: Not Supported 00:16:21.125 Get LBA Status Capability: Not Supported 00:16:21.125 Command & Feature Lockdown Capability: Not Supported 00:16:21.125 Abort Command Limit: 1 00:16:21.125 Async Event Request Limit: 1 00:16:21.125 Number of Firmware Slots: N/A 00:16:21.125 Firmware Slot 1 Read-Only: N/A 00:16:21.125 Firmware Activation Without Reset: N/A 00:16:21.125 Multiple Update Detection Support: N/A 00:16:21.125 Firmware Update Granularity: No Information Provided 00:16:21.125 Per-Namespace SMART Log: No 00:16:21.125 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:21.125 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:21.125 Command Effects Log Page: Not Supported 00:16:21.125 Get Log Page Extended Data: Supported 00:16:21.125 Telemetry Log Pages: Not Supported 00:16:21.125 Persistent Event Log Pages: Not Supported 00:16:21.125 Supported Log Pages Log Page: May Support 00:16:21.125 Commands Supported & Effects Log Page: Not Supported 00:16:21.125 Feature Identifiers & Effects Log Page:May Support 00:16:21.125 NVMe-MI Commands & Effects Log Page: May Support 00:16:21.125 Data Area 4 for Telemetry Log: Not Supported 00:16:21.125 Error Log Page Entries Supported: 1 00:16:21.125 Keep Alive: Not Supported 00:16:21.125 00:16:21.125 NVM Command Set Attributes 00:16:21.125 ========================== 00:16:21.125 Submission Queue Entry Size 00:16:21.125 Max: 1 00:16:21.125 Min: 1 00:16:21.125 Completion Queue Entry Size 00:16:21.125 Max: 1 00:16:21.125 Min: 1 00:16:21.125 Number of Namespaces: 0 00:16:21.125 Compare Command: Not Supported 00:16:21.125 Write Uncorrectable Command: Not Supported 00:16:21.125 Dataset Management Command: Not Supported 00:16:21.125 Write Zeroes Command: Not Supported 00:16:21.125 Set Features Save Field: Not Supported 00:16:21.125 Reservations: Not Supported 00:16:21.125 Timestamp: Not Supported 00:16:21.125 Copy: Not Supported 00:16:21.125 Volatile Write Cache: Not Present 00:16:21.125 Atomic Write Unit (Normal): 1 00:16:21.125 Atomic Write Unit (PFail): 1 00:16:21.125 Atomic Compare & Write Unit: 1 00:16:21.125 Fused Compare & Write: Not Supported 00:16:21.125 Scatter-Gather List 00:16:21.125 SGL Command Set: Supported 00:16:21.125 SGL Keyed: Not Supported 00:16:21.125 SGL Bit Bucket Descriptor: Not Supported 00:16:21.125 SGL Metadata Pointer: Not Supported 00:16:21.125 Oversized SGL: Not Supported 00:16:21.125 SGL Metadata Address: Not Supported 00:16:21.125 SGL Offset: Supported 00:16:21.125 Transport SGL Data Block: Not Supported 00:16:21.125 Replay Protected Memory Block: Not Supported 00:16:21.125 00:16:21.125 Firmware Slot Information 00:16:21.125 ========================= 00:16:21.125 Active slot: 0 00:16:21.125 00:16:21.125 00:16:21.125 Error Log 00:16:21.125 ========= 00:16:21.125 00:16:21.125 Active Namespaces 00:16:21.125 ================= 00:16:21.125 Discovery Log Page 00:16:21.125 ================== 00:16:21.125 Generation Counter: 2 00:16:21.125 Number of Records: 2 00:16:21.125 Record Format: 0 00:16:21.125 00:16:21.125 Discovery Log Entry 0 00:16:21.125 ---------------------- 00:16:21.125 Transport Type: 3 (TCP) 00:16:21.125 Address Family: 1 (IPv4) 00:16:21.125 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:21.125 Entry Flags: 00:16:21.125 Duplicate Returned Information: 0 00:16:21.125 Explicit Persistent Connection Support for Discovery: 0 00:16:21.125 Transport Requirements: 00:16:21.125 Secure Channel: Not Specified 00:16:21.125 Port ID: 1 (0x0001) 00:16:21.125 Controller ID: 65535 (0xffff) 00:16:21.125 Admin Max SQ Size: 32 00:16:21.125 Transport Service Identifier: 4420 00:16:21.125 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:21.125 Transport Address: 10.0.0.1 00:16:21.125 Discovery Log Entry 1 00:16:21.125 ---------------------- 00:16:21.125 Transport Type: 3 (TCP) 00:16:21.125 Address Family: 1 (IPv4) 00:16:21.125 Subsystem Type: 2 (NVM Subsystem) 00:16:21.125 Entry Flags: 00:16:21.125 Duplicate Returned Information: 0 00:16:21.125 Explicit Persistent Connection Support for Discovery: 0 00:16:21.125 Transport Requirements: 00:16:21.125 
Secure Channel: Not Specified 00:16:21.125 Port ID: 1 (0x0001) 00:16:21.125 Controller ID: 65535 (0xffff) 00:16:21.125 Admin Max SQ Size: 32 00:16:21.125 Transport Service Identifier: 4420 00:16:21.125 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:21.125 Transport Address: 10.0.0.1 00:16:21.125 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:21.125 get_feature(0x01) failed 00:16:21.125 get_feature(0x02) failed 00:16:21.125 get_feature(0x04) failed 00:16:21.125 ===================================================== 00:16:21.125 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:21.125 ===================================================== 00:16:21.125 Controller Capabilities/Features 00:16:21.125 ================================ 00:16:21.125 Vendor ID: 0000 00:16:21.125 Subsystem Vendor ID: 0000 00:16:21.125 Serial Number: 4a14c063f1aa2d889133 00:16:21.125 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:21.125 Firmware Version: 6.7.0-68 00:16:21.125 Recommended Arb Burst: 6 00:16:21.125 IEEE OUI Identifier: 00 00 00 00:16:21.125 Multi-path I/O 00:16:21.125 May have multiple subsystem ports: Yes 00:16:21.125 May have multiple controllers: Yes 00:16:21.125 Associated with SR-IOV VF: No 00:16:21.125 Max Data Transfer Size: Unlimited 00:16:21.125 Max Number of Namespaces: 1024 00:16:21.125 Max Number of I/O Queues: 128 00:16:21.125 NVMe Specification Version (VS): 1.3 00:16:21.125 NVMe Specification Version (Identify): 1.3 00:16:21.125 Maximum Queue Entries: 1024 00:16:21.125 Contiguous Queues Required: No 00:16:21.125 Arbitration Mechanisms Supported 00:16:21.125 Weighted Round Robin: Not Supported 00:16:21.125 Vendor Specific: Not Supported 00:16:21.125 Reset Timeout: 7500 ms 00:16:21.125 Doorbell Stride: 4 bytes 00:16:21.125 NVM Subsystem Reset: Not Supported 00:16:21.125 Command Sets Supported 00:16:21.125 NVM Command Set: Supported 00:16:21.125 Boot Partition: Not Supported 00:16:21.125 Memory Page Size Minimum: 4096 bytes 00:16:21.125 Memory Page Size Maximum: 4096 bytes 00:16:21.125 Persistent Memory Region: Not Supported 00:16:21.125 Optional Asynchronous Events Supported 00:16:21.125 Namespace Attribute Notices: Supported 00:16:21.125 Firmware Activation Notices: Not Supported 00:16:21.125 ANA Change Notices: Supported 00:16:21.125 PLE Aggregate Log Change Notices: Not Supported 00:16:21.125 LBA Status Info Alert Notices: Not Supported 00:16:21.125 EGE Aggregate Log Change Notices: Not Supported 00:16:21.125 Normal NVM Subsystem Shutdown event: Not Supported 00:16:21.125 Zone Descriptor Change Notices: Not Supported 00:16:21.125 Discovery Log Change Notices: Not Supported 00:16:21.125 Controller Attributes 00:16:21.125 128-bit Host Identifier: Supported 00:16:21.125 Non-Operational Permissive Mode: Not Supported 00:16:21.125 NVM Sets: Not Supported 00:16:21.125 Read Recovery Levels: Not Supported 00:16:21.125 Endurance Groups: Not Supported 00:16:21.125 Predictable Latency Mode: Not Supported 00:16:21.125 Traffic Based Keep ALive: Supported 00:16:21.125 Namespace Granularity: Not Supported 00:16:21.125 SQ Associations: Not Supported 00:16:21.125 UUID List: Not Supported 00:16:21.125 Multi-Domain Subsystem: Not Supported 00:16:21.125 Fixed Capacity Management: Not Supported 00:16:21.125 Variable Capacity Management: Not Supported 00:16:21.125 
Delete Endurance Group: Not Supported 00:16:21.125 Delete NVM Set: Not Supported 00:16:21.125 Extended LBA Formats Supported: Not Supported 00:16:21.125 Flexible Data Placement Supported: Not Supported 00:16:21.125 00:16:21.125 Controller Memory Buffer Support 00:16:21.125 ================================ 00:16:21.125 Supported: No 00:16:21.125 00:16:21.125 Persistent Memory Region Support 00:16:21.125 ================================ 00:16:21.125 Supported: No 00:16:21.125 00:16:21.125 Admin Command Set Attributes 00:16:21.125 ============================ 00:16:21.125 Security Send/Receive: Not Supported 00:16:21.125 Format NVM: Not Supported 00:16:21.125 Firmware Activate/Download: Not Supported 00:16:21.125 Namespace Management: Not Supported 00:16:21.125 Device Self-Test: Not Supported 00:16:21.125 Directives: Not Supported 00:16:21.125 NVMe-MI: Not Supported 00:16:21.125 Virtualization Management: Not Supported 00:16:21.125 Doorbell Buffer Config: Not Supported 00:16:21.125 Get LBA Status Capability: Not Supported 00:16:21.125 Command & Feature Lockdown Capability: Not Supported 00:16:21.125 Abort Command Limit: 4 00:16:21.125 Async Event Request Limit: 4 00:16:21.125 Number of Firmware Slots: N/A 00:16:21.125 Firmware Slot 1 Read-Only: N/A 00:16:21.125 Firmware Activation Without Reset: N/A 00:16:21.125 Multiple Update Detection Support: N/A 00:16:21.125 Firmware Update Granularity: No Information Provided 00:16:21.125 Per-Namespace SMART Log: Yes 00:16:21.125 Asymmetric Namespace Access Log Page: Supported 00:16:21.126 ANA Transition Time : 10 sec 00:16:21.126 00:16:21.126 Asymmetric Namespace Access Capabilities 00:16:21.126 ANA Optimized State : Supported 00:16:21.126 ANA Non-Optimized State : Supported 00:16:21.126 ANA Inaccessible State : Supported 00:16:21.126 ANA Persistent Loss State : Supported 00:16:21.126 ANA Change State : Supported 00:16:21.126 ANAGRPID is not changed : No 00:16:21.126 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:21.126 00:16:21.126 ANA Group Identifier Maximum : 128 00:16:21.126 Number of ANA Group Identifiers : 128 00:16:21.126 Max Number of Allowed Namespaces : 1024 00:16:21.126 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:21.126 Command Effects Log Page: Supported 00:16:21.126 Get Log Page Extended Data: Supported 00:16:21.126 Telemetry Log Pages: Not Supported 00:16:21.126 Persistent Event Log Pages: Not Supported 00:16:21.126 Supported Log Pages Log Page: May Support 00:16:21.126 Commands Supported & Effects Log Page: Not Supported 00:16:21.126 Feature Identifiers & Effects Log Page:May Support 00:16:21.126 NVMe-MI Commands & Effects Log Page: May Support 00:16:21.126 Data Area 4 for Telemetry Log: Not Supported 00:16:21.126 Error Log Page Entries Supported: 128 00:16:21.126 Keep Alive: Supported 00:16:21.126 Keep Alive Granularity: 1000 ms 00:16:21.126 00:16:21.126 NVM Command Set Attributes 00:16:21.126 ========================== 00:16:21.126 Submission Queue Entry Size 00:16:21.126 Max: 64 00:16:21.126 Min: 64 00:16:21.126 Completion Queue Entry Size 00:16:21.126 Max: 16 00:16:21.126 Min: 16 00:16:21.126 Number of Namespaces: 1024 00:16:21.126 Compare Command: Not Supported 00:16:21.126 Write Uncorrectable Command: Not Supported 00:16:21.126 Dataset Management Command: Supported 00:16:21.126 Write Zeroes Command: Supported 00:16:21.126 Set Features Save Field: Not Supported 00:16:21.126 Reservations: Not Supported 00:16:21.126 Timestamp: Not Supported 00:16:21.126 Copy: Not Supported 00:16:21.126 Volatile Write Cache: Present 
00:16:21.126 Atomic Write Unit (Normal): 1 00:16:21.126 Atomic Write Unit (PFail): 1 00:16:21.126 Atomic Compare & Write Unit: 1 00:16:21.126 Fused Compare & Write: Not Supported 00:16:21.126 Scatter-Gather List 00:16:21.126 SGL Command Set: Supported 00:16:21.126 SGL Keyed: Not Supported 00:16:21.126 SGL Bit Bucket Descriptor: Not Supported 00:16:21.126 SGL Metadata Pointer: Not Supported 00:16:21.126 Oversized SGL: Not Supported 00:16:21.126 SGL Metadata Address: Not Supported 00:16:21.126 SGL Offset: Supported 00:16:21.126 Transport SGL Data Block: Not Supported 00:16:21.126 Replay Protected Memory Block: Not Supported 00:16:21.126 00:16:21.126 Firmware Slot Information 00:16:21.126 ========================= 00:16:21.126 Active slot: 0 00:16:21.126 00:16:21.126 Asymmetric Namespace Access 00:16:21.126 =========================== 00:16:21.126 Change Count : 0 00:16:21.126 Number of ANA Group Descriptors : 1 00:16:21.126 ANA Group Descriptor : 0 00:16:21.126 ANA Group ID : 1 00:16:21.126 Number of NSID Values : 1 00:16:21.126 Change Count : 0 00:16:21.126 ANA State : 1 00:16:21.126 Namespace Identifier : 1 00:16:21.126 00:16:21.126 Commands Supported and Effects 00:16:21.126 ============================== 00:16:21.126 Admin Commands 00:16:21.126 -------------- 00:16:21.126 Get Log Page (02h): Supported 00:16:21.126 Identify (06h): Supported 00:16:21.126 Abort (08h): Supported 00:16:21.126 Set Features (09h): Supported 00:16:21.126 Get Features (0Ah): Supported 00:16:21.126 Asynchronous Event Request (0Ch): Supported 00:16:21.126 Keep Alive (18h): Supported 00:16:21.126 I/O Commands 00:16:21.126 ------------ 00:16:21.126 Flush (00h): Supported 00:16:21.126 Write (01h): Supported LBA-Change 00:16:21.126 Read (02h): Supported 00:16:21.126 Write Zeroes (08h): Supported LBA-Change 00:16:21.126 Dataset Management (09h): Supported 00:16:21.126 00:16:21.126 Error Log 00:16:21.126 ========= 00:16:21.126 Entry: 0 00:16:21.126 Error Count: 0x3 00:16:21.126 Submission Queue Id: 0x0 00:16:21.126 Command Id: 0x5 00:16:21.126 Phase Bit: 0 00:16:21.126 Status Code: 0x2 00:16:21.126 Status Code Type: 0x0 00:16:21.126 Do Not Retry: 1 00:16:21.126 Error Location: 0x28 00:16:21.126 LBA: 0x0 00:16:21.126 Namespace: 0x0 00:16:21.126 Vendor Log Page: 0x0 00:16:21.126 ----------- 00:16:21.126 Entry: 1 00:16:21.126 Error Count: 0x2 00:16:21.126 Submission Queue Id: 0x0 00:16:21.126 Command Id: 0x5 00:16:21.126 Phase Bit: 0 00:16:21.126 Status Code: 0x2 00:16:21.126 Status Code Type: 0x0 00:16:21.126 Do Not Retry: 1 00:16:21.126 Error Location: 0x28 00:16:21.126 LBA: 0x0 00:16:21.126 Namespace: 0x0 00:16:21.126 Vendor Log Page: 0x0 00:16:21.126 ----------- 00:16:21.126 Entry: 2 00:16:21.126 Error Count: 0x1 00:16:21.126 Submission Queue Id: 0x0 00:16:21.126 Command Id: 0x4 00:16:21.126 Phase Bit: 0 00:16:21.126 Status Code: 0x2 00:16:21.126 Status Code Type: 0x0 00:16:21.126 Do Not Retry: 1 00:16:21.126 Error Location: 0x28 00:16:21.126 LBA: 0x0 00:16:21.126 Namespace: 0x0 00:16:21.126 Vendor Log Page: 0x0 00:16:21.126 00:16:21.126 Number of Queues 00:16:21.126 ================ 00:16:21.126 Number of I/O Submission Queues: 128 00:16:21.126 Number of I/O Completion Queues: 128 00:16:21.126 00:16:21.126 ZNS Specific Controller Data 00:16:21.126 ============================ 00:16:21.126 Zone Append Size Limit: 0 00:16:21.126 00:16:21.126 00:16:21.126 Active Namespaces 00:16:21.126 ================= 00:16:21.126 get_feature(0x05) failed 00:16:21.126 Namespace ID:1 00:16:21.126 Command Set Identifier: NVM (00h) 
00:16:21.126 Deallocate: Supported 00:16:21.126 Deallocated/Unwritten Error: Not Supported 00:16:21.126 Deallocated Read Value: Unknown 00:16:21.126 Deallocate in Write Zeroes: Not Supported 00:16:21.126 Deallocated Guard Field: 0xFFFF 00:16:21.126 Flush: Supported 00:16:21.126 Reservation: Not Supported 00:16:21.126 Namespace Sharing Capabilities: Multiple Controllers 00:16:21.126 Size (in LBAs): 1310720 (5GiB) 00:16:21.126 Capacity (in LBAs): 1310720 (5GiB) 00:16:21.126 Utilization (in LBAs): 1310720 (5GiB) 00:16:21.126 UUID: 2b45945e-f9d4-4e7d-a659-6883a4eb598c 00:16:21.126 Thin Provisioning: Not Supported 00:16:21.126 Per-NS Atomic Units: Yes 00:16:21.126 Atomic Boundary Size (Normal): 0 00:16:21.126 Atomic Boundary Size (PFail): 0 00:16:21.126 Atomic Boundary Offset: 0 00:16:21.126 NGUID/EUI64 Never Reused: No 00:16:21.126 ANA group ID: 1 00:16:21.126 Namespace Write Protected: No 00:16:21.126 Number of LBA Formats: 1 00:16:21.126 Current LBA Format: LBA Format #00 00:16:21.126 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:21.126 00:16:21.126 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:21.126 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.126 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.384 rmmod nvme_tcp 00:16:21.384 rmmod nvme_fabrics 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:21.384 
22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:21.384 22:27:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.352 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:22.352 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:22.611 00:16:22.611 real 0m3.439s 00:16:22.611 user 0m1.099s 00:16:22.611 sys 0m1.897s 00:16:22.611 22:27:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.611 22:27:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.611 ************************************ 00:16:22.611 END TEST nvmf_identify_kernel_target 00:16:22.611 ************************************ 00:16:22.611 22:27:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.611 22:27:36 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:22.611 22:27:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.611 22:27:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.611 22:27:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.611 ************************************ 00:16:22.611 START TEST nvmf_auth_host 00:16:22.611 ************************************ 00:16:22.611 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:22.611 * Looking for test storage... 
00:16:22.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.612 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.870 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:22.871 Cannot find device "nvmf_tgt_br" 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.871 Cannot find device "nvmf_tgt_br2" 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:22.871 Cannot find device "nvmf_tgt_br" 
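The nvmf_veth_init teardown above and the setup traced next build a small bridged topology: the initiator-side veth nvmf_init_if keeps 10.0.0.1/24 in the default namespace, while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace; the peer ends of all three veth pairs are enslaved to the nvmf_br bridge and TCP port 4420 is opened on the initiator interface. A condensed reconstruction using only the names and addresses visible in the trace (an abbreviated sketch, not the exact nvmf/common.sh code):

# Abbreviated sketch of nvmf_veth_init (names and addresses taken from the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The connectivity pings that follow (10.0.0.2 and 10.0.0.3 from the default namespace, 10.0.0.1 from inside the namespace) confirm the bridge is forwarding before the target application is started.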
00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:22.871 Cannot find device "nvmf_tgt_br2" 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.871 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:23.128 00:16:23.128 --- 10.0.0.2 ping statistics --- 00:16:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.128 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:23.128 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:23.128 00:16:23.128 --- 10.0.0.3 ping statistics --- 00:16:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.129 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:23.129 00:16:23.129 --- 10.0.0.1 ping statistics --- 00:16:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.129 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78387 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78387 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78387 ']' 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
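With the bridge verified, nvmfappstart -L nvme_auth starts the SPDK target inside the target namespace with the nvme_auth debug log component enabled, then waits for its RPC socket before issuing RPCs. Roughly, under the assumption that waitforlisten simply polls the default UNIX-domain RPC socket (the real helper in autotest_common.sh is more elaborate and also handles retries and cleanup):

# Sketch of what nvmfappstart -L nvme_auth amounts to in this run
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# The RPC socket is filesystem-based, so it is reachable from the default namespace.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done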
00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 22:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36fd238814e6ea68edd0f6e1d8638ee4 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.w9N 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36fd238814e6ea68edd0f6e1d8638ee4 0 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36fd238814e6ea68edd0f6e1d8638ee4 0 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36fd238814e6ea68edd0f6e1d8638ee4 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.w9N 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.w9N 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.w9N 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.066 22:27:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=37ef3ada5330666f86b1fcc5ae5956143ada13e9f576e507107a8f23cadc6301 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pGC 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 37ef3ada5330666f86b1fcc5ae5956143ada13e9f576e507107a8f23cadc6301 3 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 37ef3ada5330666f86b1fcc5ae5956143ada13e9f576e507107a8f23cadc6301 3 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=37ef3ada5330666f86b1fcc5ae5956143ada13e9f576e507107a8f23cadc6301 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:24.066 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pGC 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pGC 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pGC 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=616d7e61a091c549e38980519fa5c2ab4cc53d7c6c53f075 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XJR 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 616d7e61a091c549e38980519fa5c2ab4cc53d7c6c53f075 0 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 616d7e61a091c549e38980519fa5c2ab4cc53d7c6c53f075 0 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=616d7e61a091c549e38980519fa5c2ab4cc53d7c6c53f075 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XJR 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XJR 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XJR 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e30b62014e64e365e8c7e3537569520356c5dc88677d53c 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yPZ 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e30b62014e64e365e8c7e3537569520356c5dc88677d53c 2 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e30b62014e64e365e8c7e3537569520356c5dc88677d53c 2 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e30b62014e64e365e8c7e3537569520356c5dc88677d53c 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yPZ 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yPZ 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yPZ 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b7eb92544b5a0ae39546506ebe7661b 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZLK 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 
1b7eb92544b5a0ae39546506ebe7661b 1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b7eb92544b5a0ae39546506ebe7661b 1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b7eb92544b5a0ae39546506ebe7661b 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZLK 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZLK 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZLK 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.326 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.327 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:24.327 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:24.327 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=318a26df0b693f2e6c1bb4d3b3376806 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EqA 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 318a26df0b693f2e6c1bb4d3b3376806 1 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 318a26df0b693f2e6c1bb4d3b3376806 1 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=318a26df0b693f2e6c1bb4d3b3376806 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:24.585 22:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EqA 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EqA 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.EqA 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- 
# len=48 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b22abb775b3c60048888f38797eee4ac54ba5aecc2d96128 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CjG 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b22abb775b3c60048888f38797eee4ac54ba5aecc2d96128 2 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b22abb775b3c60048888f38797eee4ac54ba5aecc2d96128 2 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b22abb775b3c60048888f38797eee4ac54ba5aecc2d96128 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CjG 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CjG 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CjG 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.585 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07cf1f5effabab62a3b8cea1e426939a 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TIN 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07cf1f5effabab62a3b8cea1e426939a 0 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07cf1f5effabab62a3b8cea1e426939a 0 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07cf1f5effabab62a3b8cea1e426939a 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TIN 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TIN 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TIN 00:16:24.586 
22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45f34d5076cdfb7fa79f020f5ea310ff144f247d953733abdabb4a251b7759a7 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aIj 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45f34d5076cdfb7fa79f020f5ea310ff144f247d953733abdabb4a251b7759a7 3 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45f34d5076cdfb7fa79f020f5ea310ff144f247d953733abdabb4a251b7759a7 3 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45f34d5076cdfb7fa79f020f5ea310ff144f247d953733abdabb4a251b7759a7 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aIj 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aIj 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.aIj 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78387 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78387 ']' 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
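Each /tmp/spdk.key-* file produced above holds one DH-HMAC-CHAP secret in the DHHC-1 text representation. The number after "DHHC-1:" is the hash transform from the digests map in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512), and the payload is base64 text wrapping the generated hex string plus a 4-byte checksum, which is what the unlogged python one-liner in format_dhchap_key appears to compute. A hypothetical stand-in for one key, assuming the checksum is a little-endian CRC-32 of the secret (an assumption inferred from the lengths of the values above, not read from the script):

# Hypothetical re-creation of gen_dhchap_key null 48 / format_dhchap_key (assumptions noted above)
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, used verbatim as the secret bytes
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
payload=$(python3 -c 'import base64, sys, zlib
s = sys.argv[1].encode()
print(base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode())' "$key")
file=$(mktemp -t spdk.key-null.XXX)
printf 'DHHC-1:%02d:%s:\n' "$digest" "$payload" > "$file"
chmod 0600 "$file"

The same representation shows up later in the trace as the --dhchap-key / --dhchap-ctrlr-key material and in the values nvmet_auth_set_key echoes into the kernel target's configuration.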
00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.586 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w9N 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pGC ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pGC 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XJR 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yPZ ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yPZ 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZLK 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.EqA ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EqA 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.844 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
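The key files are then registered with the running target through keyring_file_add_key RPCs so that bdev_nvme can later reference them by name (key0..key4 for the host secrets, ckey0..ckey3 for the controller secrets). rpc_cmd is a thin wrapper around scripts/rpc.py; the equivalent direct calls for the registrations traced above and continuing below would look like this (the default /var/tmp/spdk.sock socket is assumed):

# Direct rpc.py equivalent of the keyring registrations in this trace
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.w9N
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pGC
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.XJR
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yPZ
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.ZLK
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EqA
# key3 (/tmp/spdk.key-sha384.CjG), ckey3 (/tmp/spdk.key-null.TIN) and key4
# (/tmp/spdk.key-sha512.aIj) follow the same pattern; ckeys[4] is empty, so no ckey4 is added.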
00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CjG 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TIN ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TIN 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.aIj 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:24.845 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
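nvmet_auth_init then stands up a Linux-kernel nvmet target in the default namespace, listening on the initiator address 10.0.0.1 returned by get_main_ns_ip; in this test the SPDK application plays the authenticating host and connects back to that kernel target. The setup.sh reset and GPT scan traced below pick an unused local NVMe namespace (/dev/nvme1n1 here) to back the exported namespace, and configure_kernel_target builds the configfs tree. A condensed reconstruction follows; bash xtrace does not print redirections, so the attribute file names are the standard nvmet configfs names rather than values read from the log:

# Condensed reconstruction of configure_kernel_target (paths and values from the trace;
# attribute file names are standard nvmet configfs names and are an assumption here)
modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
# (the trace also echoes "SPDK-nqn.2024-02.io.spdk:cnode0" into a subsystem
#  identification attribute whose name is hidden by the redirection)
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"

Further down, host/auth.sh adds /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, turns allow-any-host back off (the echo 0 at host/auth.sh@37), links the host into the subsystem's allowed_hosts, and nvmet_auth_set_key echoes the chosen hash ('hmac(sha256)'), DH group (ffdhe2048) and DHHC-1 secrets into that host entry's dhchap attributes, again through redirections the xtrace does not show.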
00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:25.103 22:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:25.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:25.669 Waiting for block devices as requested 00:16:25.669 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.669 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:26.604 No valid GPT data, bailing 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:26.604 No valid GPT data, bailing 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:26.604 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:26.863 No valid GPT data, bailing 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:26.863 No valid GPT data, bailing 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:26.863 22:27:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:26.863 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -a 10.0.0.1 -t tcp -s 4420 00:16:26.864 00:16:26.864 Discovery Log Number of Records 2, Generation counter 2 00:16:26.864 =====Discovery Log Entry 0====== 00:16:26.864 trtype: tcp 00:16:26.864 adrfam: ipv4 00:16:26.864 subtype: current discovery subsystem 00:16:26.864 treq: not specified, sq flow control disable supported 00:16:26.864 portid: 1 00:16:26.864 trsvcid: 4420 00:16:26.864 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:26.864 traddr: 10.0.0.1 00:16:26.864 eflags: none 00:16:26.864 sectype: none 00:16:26.864 =====Discovery Log Entry 1====== 00:16:26.864 trtype: tcp 00:16:26.864 adrfam: ipv4 00:16:26.864 subtype: nvme subsystem 00:16:26.864 treq: not specified, sq flow control disable supported 00:16:26.864 portid: 1 00:16:26.864 trsvcid: 4420 00:16:26.864 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:26.864 traddr: 10.0.0.1 00:16:26.864 eflags: none 00:16:26.864 sectype: none 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:26.864 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.122 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.123 nvme0n1 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.123 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 nvme0n1 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.381 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 nvme0n1 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.382 22:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 nvme0n1 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:27.640 22:27:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:27.640 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.641 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 nvme0n1 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 nvme0n1 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.900 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.159 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.416 nvme0n1 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.416 22:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.417 22:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.417 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.417 22:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.673 nvme0n1 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.673 22:27:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:28.673 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.674 nvme0n1 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.674 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 nvme0n1 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:28.933 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
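(The nvmf/common.sh@741-755 entries running through this stretch of the log are the get_main_ns_ip helper resolving which address the host should dial for the tcp transport. A minimal reconstruction of that helper from the trace alone is sketched below; the function body, the compound test at @747 and the indirect ${!ip} expansion are inferred from the echoed values rather than copied from nvmf/common.sh, so treat it as an illustration, not the shipped code.)

# Sketch of get_main_ns_ip as suggested by the nvmf/common.sh@741-755 trace.
# Assumptions for this run: TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    # Each transport maps to the *name* of the variable holding its usable address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # @747: both emptiness checks trace against the same script line.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # @748: ip holds the variable name; @750 checks its value via indirect expansion.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1

    # @755: prints 10.0.0.1, which feeds the -a argument of bdev_nvme_attach_controller.
    echo "${!ip}"
}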
00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.934 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 nvme0n1 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:29.192 22:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 nvme0n1 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.769 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:30.028 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 nvme0n1 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.029 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.287 nvme0n1 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.287 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.546 22:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.546 nvme0n1 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.546 22:27:44 
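[annotation] The get_main_ns_ip helper being traced around this point (nvmf/common.sh lines 741-755) only appears in the log as its expanded commands. A minimal reconstruction is sketched below; the name of the variable holding the transport ("tcp" in this run) is not visible in the trace, so TEST_TRANSPORT is an assumption, not the verbatim source.

    # Reconstructed from the xtrace above; picks the env-var name for the
    # active transport, dereferences it, and prints the address to dial.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}   # "NVMF_INITIATOR_IP" for tcp
        if [[ -z ${!ip} ]]; then
            return 1
        fi
        echo "${!ip}"                          # resolves to 10.0.0.1 in this run
    }
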
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.546 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.804 nvme0n1 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.804 22:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.180 22:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.441 nvme0n1 00:16:32.441 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.441 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.441 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.441 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.441 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.728 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.729 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.988 nvme0n1 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:32.988 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.989 
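[annotation] The initiator-side sequence that repeats for every digest/dhgroup/key combination in this trace can be collapsed into one function. The sketch below is reconstructed from the expanded commands at host/auth.sh lines 55-65, not copied from the source; rpc_cmd is the suite's wrapper around SPDK's JSON-RPC client, and the literal address and NQNs are the ones used throughout this run.

    # One authentication round-trip as read back out of the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Limit the initiator to the digest and DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the numbered key; 10.0.0.1 is what get_main_ns_ip
        # resolves to here, and the controller key is passed only when a
        # ckey exists for this index (bidirectional authentication).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication passed only if the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare "nvme0n1" lines interleaved in the log are the namespace of that controller surfacing between the attach and the detach.
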
22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.989 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.247 nvme0n1 00:16:33.247 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.248 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.506 22:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.764 nvme0n1 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.764 22:27:47 
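[annotation] On the target side, nvmet_auth_set_key (host/auth.sh lines 42-51) shows up in the trace only as a series of echo commands, because xtrace does not print redirections. A plausible reconstruction is sketched below, assuming the kernel nvmet configfs layout under /sys/kernel/config/nvmet/hosts/<hostnqn>/ with dhchap_* attributes; those paths and attribute names are an assumption, not visible in this log.

    # Plausible sketch only: pushes the per-key DH-HMAC-CHAP settings for
    # the host NQN into the kernel nvmet target before each connect.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host_dir/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup" > "$host_dir/dhchap_dhgroup"       # e.g. ffdhe6144
        echo "$key" > "$host_dir/dhchap_key"               # DHHC-1:..: host secret
        # Controller key is optional; keyid 4 has no ckey, so only the
        # host-to-controller direction is authenticated there.
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }
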
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.764 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 nvme0n1 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.279 22:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.280 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.280 22:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.844 nvme0n1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.844 22:27:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.844 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 nvme0n1 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.424 22:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.988 nvme0n1 00:16:35.988 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.988 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.988 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.988 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.988 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.989 
22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.989 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.553 nvme0n1 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.553 22:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:36.553 
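[annotation] The keys[] and ckeys[] values cycling through this trace are DH-HMAC-CHAP secrets in the "DHHC-1:NN:<base64>:" representation; across the five key indices the run covers all four secret variants (NN = 00 for an unhashed secret, 01/02/03 for SHA-256/384/512-derived ones). Such a secret can be produced with nvme-cli's gen-dhchap-key plug-in; the exact option names below are an assumption taken from nvme-cli documentation, not from this log, so verify them locally.

    # Hypothetical example: a 48-byte secret in "DHHC-1:02:...:" form for the
    # host NQN used by this run (flags assumed; check `nvme gen-dhchap-key --help`).
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
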
22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.553 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 nvme0n1 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 nvme0n1 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.119 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
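Each keyid iteration in this trace follows the same host-side sequence: restrict bdev_nvme to one DH-HMAC-CHAP digest and DH group, attach the controller with the named key (and controller key, when one exists), confirm the controller came up, then detach. The sketch below is a hypothetical condensation of that sequence built only from the rpc_cmd calls visible above; the real connect_authenticate in host/auth.sh may differ in detail, and rpc_cmd is the autotest wrapper around scripts/rpc.py.

# Hypothetical condensation of the per-keyid host-side round seen in this trace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Allow only the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP; the controller key is passed only when ckey$keyid exists.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Authentication succeeded if the controller shows up under its bdev name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # Detach so the next digest/dhgroup/keyid combination starts from scratch.
    rpc_cmd bdev_nvme_detach_controller nvme0
}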
00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 nvme0n1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.378 22:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 nvme0n1 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 nvme0n1 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.638 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.897 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.898 nvme0n1 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
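Much of the surrounding output is the expansion of get_main_ns_ip, which only resolves the address to attach to: it maps the transport to a variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences it, yielding 10.0.0.1 in this run. A rough reconstruction follows; the transport variable name and the indirection are assumptions, since the trace only shows already-expanded values.

# Rough reconstruction of get_main_ns_ip (nvmf/common.sh@741-755 in this trace).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # $TEST_TRANSPORT is assumed; the trace only shows its value, "tcp".
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: the trace checks and then echoes the resolved 10.0.0.1.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}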
00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.898 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 nvme0n1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
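The digest/dhgroup/keyid combinations being exercised come from three nested loops, visible above as host/auth.sh@100 through @104: for every digest and DH group, the target key is rotated through each key id and a fresh authenticated connect is attempted. In outline (the digests, dhgroups, keys and ckeys arrays are populated earlier in the script):

# Outline of the sweep at host/auth.sh@100-104 as it appears in this trace.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
        done
    done
done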
00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 nvme0n1 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.157 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:38.417 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.418 nvme0n1 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.418 22:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.418 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.677 nvme0n1 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.677 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.936 nvme0n1 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.936 22:27:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.936 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.937 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.196 nvme0n1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.196 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.454 nvme0n1 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.455 22:27:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:39.455 22:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.455 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.714 nvme0n1 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:39.714 22:27:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.714 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.972 nvme0n1 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:39.972 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.230 nvme0n1 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.230 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.231 22:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.489 nvme0n1 00:16:40.489 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.749 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.750 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.061 nvme0n1 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.061 22:27:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.061 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 nvme0n1 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.326 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.585 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.586 22:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.844 nvme0n1 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.844 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.845 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 nvme0n1 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
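For readability, the per-key round trip that the trace above keeps repeating can be summarized as the minimal sketch below. This is not the upstream host/auth.sh: the rpc client path and the pre-registered key names (key$keyid / ckey$keyid, set up earlier in the test as seen in the trace) are assumptions, and only RPC calls that appear verbatim in the log are used.

    # Hedged sketch of one DH-HMAC-CHAP round trip, mirroring the rpc_cmd calls traced above.
    rpc=./scripts/rpc.py      # assumed path to the SPDK JSON-RPC client (the test uses its rpc_cmd wrapper)
    digest=sha384             # one of the digests exercised above
    dhgroup=ffdhe6144         # one of the DH groups exercised above
    keyid=2

    # Restrict the host to a single digest/DH-group pair, as bdev_nvme_set_options does in the trace.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target at 10.0.0.1:4420 with the host key and controller key for this keyid.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The attach only succeeds if authentication completed; confirm the controller exists, then detach.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0

The same three-step pattern (set_options, attach with key/ckey, verify and detach) is what each nvme0n1 block in the trace corresponds to, once per key index and DH group.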
00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.104 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.363 22:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.930 nvme0n1 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.930 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.931 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.498 nvme0n1 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.498 22:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.066 nvme0n1 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.066 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.067 22:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.634 nvme0n1 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.634 22:27:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.634 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.202 nvme0n1 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.202 nvme0n1 00:16:45.202 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.461 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 nvme0n1 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.462 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.721 nvme0n1 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.721 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.722 22:27:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.722 22:27:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.722 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 nvme0n1 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 nvme0n1 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.981 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 nvme0n1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 
22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.241 22:27:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.241 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 nvme0n1 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.520 22:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
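The trace above cycles through one fixed pattern per digest/dhgroup/keyid combination: program the DH-HMAC-CHAP key into the target, restrict the SPDK host to the matching digest and DH group, attach over TCP with the per-key secrets, confirm the controller shows up as nvme0, then detach before the next combination. A minimal sketch of that loop, using the function and array names visible in the trace; the array contents and the key-name registration are placeholders, not the script's real values:

# Sketch reconstructed from the expanded trace; keys/ckeys contents are omitted here,
# and key${keyid}/ckey${keyid} are key names the script registered earlier (not shown).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
keys=()    # DHHC-1 host secrets, indexed by keyid
ckeys=()   # optional controller secrets; an empty entry skips --dhchap-ctrlr-key

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # target side: install this key with hmac($digest) and $dhgroup
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # host side: only allow the digest/dhgroup under test, then connect
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # authentication succeeded if the controller is visible, then tear it down
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done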
00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 nvme0n1 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.520 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.779 22:28:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.779 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.779 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.779 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
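The get_main_ns_ip expansion that repeats before every attach is a transport-to-address lookup from nvmf/common.sh: it maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, dereferences whichever matches the transport under test, and prints 10.0.0.1 for this TCP run. A reconstruction of that helper from the expanded trace follows; the transport variable appears only by its value (tcp) in the log, so TEST_TRANSPORT is an assumed name, and the untaken fallback branches are not reproduced:

get_main_ns_ip() {
  # Reconstructed sketch; TEST_TRANSPORT is an assumption, and the fallback paths
  # behind the untaken [[ -z ... ]] checks in the trace are left out.
  local ip
  local -A ip_candidates=()
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP

  [[ -z "$TEST_TRANSPORT" ]] && return 1
  [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}

  # indirect expansion: for tcp this resolves NVMF_INITIATOR_IP to 10.0.0.1
  [[ -z "${!ip}" ]] && return 1
  echo "${!ip}"
}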
00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 nvme0n1 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:46.780 
22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.780 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 nvme0n1 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.039 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 nvme0n1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.298 22:28:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.298 22:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.557 nvme0n1 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
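
Note: the xtrace above keeps repeating one host-side pattern per digest/DH-group/key index: bdev_nvme_set_options restricts DH-HMAC-CHAP negotiation to the digest and DH group under test, bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 with --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists for that index), bdev_nvme_get_controllers confirms that nvme0 authenticated and came up, and bdev_nvme_detach_controller tears it down for the next round. A minimal standalone sketch of one such iteration, assuming the stock scripts/rpc.py helper talks to the running SPDK application and that the key1/ckey1 names were registered earlier in the run (registration is not shown in this excerpt):

    # One connect_authenticate round, outside the test harness (sketch).
    digest=sha512 dhgroup=ffdhe4096 keyid=1

    # Host side: only advertise the digest/DH group being exercised.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host secret, and the controller secret when this key
    # index has one (bidirectional authentication).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Expect "nvme0" back if authentication succeeded, then detach.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_nvme_detach_controller nvme0
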
00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.557 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 nvme0n1 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.817 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.076 nvme0n1 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.076 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 nvme0n1 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
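
Note: the echo commands traced at host/auth.sh@48-51 above are the target-side half of each round. nvmet_auth_set_key pushes 'hmac(sha512)', the DH group, and the DHHC-1 secrets into the kernel nvmet host entry before the initiator tries to connect; the controller key is only written when a ckey is defined for that key index. A rough sketch of that step follows. The configfs attribute names and host path are assumptions based on the Linux nvmet in-band-authentication interface, since xtrace only records the echo payloads, not the redirection targets.

    # Target-side key provisioning mirrored by nvmet_auth_set_key (sketch;
    # the configfs paths are assumed, this log does not show them).
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

    digest=sha512 dhgroup=ffdhe6144
    key='DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn:'
    ckey='DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=:'

    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # hash, as at auth.sh@48
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"  # DH group, as at auth.sh@49
    echo "$key"          > "$host_dir/dhchap_key"      # host secret, as at auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # optional controller secret, auth.sh@51
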
00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.336 22:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.608 nvme0n1 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.608 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
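
Note: get_main_ns_ip, traced at nvmf/common.sh@741-755 just above, is what resolves the 10.0.0.1 address handed to bdev_nvme_attach_controller: it maps each transport to the name of an environment variable and then dereferences it. A condensed reconstruction of that selection logic follows; the transport variable name used here (TEST_TRANSPORT) is an assumption, since the trace only shows its value, tcp, and the exported addresses come from the test environment.

    # Condensed reconstruction of the IP lookup traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # RDMA runs use the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # TCP runs (this job) use the initiator IP

        # Pick the variable *name* for the active transport, then dereference it.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}
        [[ -z $ip ]] && return 1
        echo "$ip"  # -> 10.0.0.1 in this run
    }
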
00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.882 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.883 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.141 nvme0n1 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:49.141 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.142 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 nvme0n1 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.401 22:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.401 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 nvme0n1 00:16:49.968 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.968 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.969 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.227 nvme0n1 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.227 22:28:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmZDIzODgxNGU2ZWE2OGVkZDBmNmUxZDg2MzhlZTQVz/Pn: 00:16:50.227 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: ]] 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzdlZjNhZGE1MzMwNjY2Zjg2YjFmY2M1YWU1OTU2MTQzYWRhMTNlOWY1NzZlNTA3MTA3YThmMjNjYWRjNjMwMe10gk0=: 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.228 22:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.795 nvme0n1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.795 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 nvme0n1 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.363 22:28:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI3ZWI5MjU0NGI1YTBhZTM5NTQ2NTA2ZWJlNzY2MWI5dSGe: 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzE4YTI2ZGYwYjY5M2YyZTZjMWJiNGQzYjMzNzY4MDZMb2tI: 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 22:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.930 nvme0n1 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjIyYWJiNzc1YjNjNjAwNDg4ODhmMzg3OTdlZWU0YWM1NGJhNWFlY2MyZDk2MTI4oV6o0A==: 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDdjZjFmNWVmZmFiYWI2MmEzYjhjZWExZTQyNjkzOWHOn0rq: 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:51.930 22:28:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.930 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.931 22:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.497 nvme0n1 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVmMzRkNTA3NmNkZmI3ZmE3OWYwMjBmNWVhMzEwZmYxNDRmMjQ3ZDk1MzczM2FiZGFiYjRhMjUxYjc3NTlhN4XUz0Y=: 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.497 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:52.498 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.064 nvme0n1 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:53.064 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE2ZDdlNjFhMDkxYzU0OWUzODk4MDUxOWZhNWMyYWI0Y2M1M2Q3YzZjNTNmMDc1QNmv2w==: 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: ]] 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGUzMGI2MjAxNGU2NGUzNjVlOGM3ZTM1Mzc1Njk1MjAzNTZjNWRjODg2NzdkNTNjB5n8vg==: 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.065 
22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.065 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.326 request: 00:16:53.326 { 00:16:53.326 "name": "nvme0", 00:16:53.326 "trtype": "tcp", 00:16:53.326 "traddr": "10.0.0.1", 00:16:53.326 "adrfam": "ipv4", 00:16:53.326 "trsvcid": "4420", 00:16:53.326 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:53.326 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:53.326 "prchk_reftag": false, 00:16:53.326 "prchk_guard": false, 00:16:53.326 "hdgst": false, 00:16:53.326 "ddgst": false, 00:16:53.326 "method": "bdev_nvme_attach_controller", 00:16:53.326 "req_id": 1 00:16:53.326 } 00:16:53.326 Got JSON-RPC error response 00:16:53.326 response: 00:16:53.326 { 00:16:53.326 "code": -5, 00:16:53.326 "message": "Input/output error" 00:16:53.326 } 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.326 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.326 request: 00:16:53.326 { 00:16:53.326 "name": "nvme0", 00:16:53.326 "trtype": "tcp", 00:16:53.326 "traddr": "10.0.0.1", 00:16:53.326 "adrfam": "ipv4", 00:16:53.326 "trsvcid": "4420", 00:16:53.326 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:53.326 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:53.326 "prchk_reftag": false, 00:16:53.326 "prchk_guard": false, 00:16:53.327 "hdgst": false, 00:16:53.327 "ddgst": false, 00:16:53.327 "dhchap_key": "key2", 00:16:53.327 "method": "bdev_nvme_attach_controller", 00:16:53.327 "req_id": 1 00:16:53.327 } 00:16:53.327 Got JSON-RPC error response 00:16:53.327 response: 00:16:53.327 { 00:16:53.327 "code": -5, 00:16:53.327 "message": "Input/output error" 00:16:53.327 } 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:53.327 22:28:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.327 request: 00:16:53.327 { 00:16:53.327 "name": "nvme0", 00:16:53.327 "trtype": "tcp", 00:16:53.327 "traddr": "10.0.0.1", 00:16:53.327 "adrfam": "ipv4", 
00:16:53.327 "trsvcid": "4420", 00:16:53.327 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:53.327 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:53.327 "prchk_reftag": false, 00:16:53.327 "prchk_guard": false, 00:16:53.327 "hdgst": false, 00:16:53.327 "ddgst": false, 00:16:53.327 "dhchap_key": "key1", 00:16:53.327 "dhchap_ctrlr_key": "ckey2", 00:16:53.327 "method": "bdev_nvme_attach_controller", 00:16:53.327 "req_id": 1 00:16:53.327 } 00:16:53.327 Got JSON-RPC error response 00:16:53.327 response: 00:16:53.327 { 00:16:53.327 "code": -5, 00:16:53.327 "message": "Input/output error" 00:16:53.327 } 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.327 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.327 rmmod nvme_tcp 00:16:53.327 rmmod nvme_fabrics 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78387 ']' 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78387 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78387 ']' 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78387 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.586 22:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78387 00:16:53.586 killing process with pid 78387 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78387' 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78387 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78387 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.586 
22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.586 22:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:53.844 22:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:54.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.852 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.852 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.852 22:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.w9N /tmp/spdk.key-null.XJR /tmp/spdk.key-sha256.ZLK /tmp/spdk.key-sha384.CjG /tmp/spdk.key-sha512.aIj /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:54.852 22:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:55.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:55.422 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.422 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.422 00:16:55.422 real 0m32.903s 00:16:55.422 user 0m29.896s 00:16:55.422 sys 0m4.906s 00:16:55.422 22:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.422 22:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.422 
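
The pass above exercises DH-HMAC-CHAP in both directions. For each generated key the harness primes the kernel nvmet side with nvmet_auth_set_key, restricts the SPDK host to a single digest/dhgroup pair, and requires a bdev_nvme_attach_controller/bdev_nvme_detach_controller round trip to succeed; afterwards every mismatched key combination has to be rejected with JSON-RPC error -5 (Input/output error). A condensed, slightly reordered sketch of that loop, built only from the calls visible in the log (rpc_cmd, NOT, nvmet_auth_set_key and the keys/ckeys arrays of DHHC-1 secrets come from the harness scripts, so this is illustrative rather than a standalone script):

  # Shared connection arguments, matching the attach calls logged above.
  conn=(-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0)

  # Positive path: one attach/verify/detach per key id (sha512 digest, ffdhe8192 dhgroup here).
  for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key sha512 ffdhe8192 "$keyid"        # program the kernel target side
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # bidirectional only if a ctrlr key exists
      rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key "key$keyid" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done

  # Negative path: no key, the wrong key, and a mismatched controller key must each
  # fail with -5 and leave zero controllers behind.
  NOT rpc_cmd bdev_nvme_attach_controller "${conn[@]}"
  NOT rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key key2
  NOT rpc_cmd bdev_nvme_attach_controller "${conn[@]}" --dhchap-key key1 --dhchap-ctrlr-key ckey2
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
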
************************************ 00:16:55.422 END TEST nvmf_auth_host 00:16:55.422 ************************************ 00:16:55.681 22:28:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:55.681 22:28:09 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:16:55.681 22:28:09 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:55.681 22:28:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.681 22:28:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.681 22:28:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.681 ************************************ 00:16:55.681 START TEST nvmf_digest 00:16:55.681 ************************************ 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:55.681 * Looking for test storage... 00:16:55.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.681 22:28:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:55.682 Cannot find device "nvmf_tgt_br" 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.682 Cannot find device "nvmf_tgt_br2" 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:55.682 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:55.941 Cannot find device "nvmf_tgt_br" 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:55.941 22:28:09 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:55.941 Cannot find device "nvmf_tgt_br2" 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:55.941 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.942 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.201 22:28:09 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:56.201 00:16:56.201 --- 10.0.0.2 ping statistics --- 00:16:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.201 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:16:56.201 00:16:56.201 --- 10.0.0.3 ping statistics --- 00:16:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.201 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:16:56.201 00:16:56.201 --- 10.0.0.1 ping statistics --- 00:16:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.201 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:56.201 ************************************ 00:16:56.201 START TEST nvmf_digest_clean 00:16:56.201 ************************************ 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:56.201 22:28:09 
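
The "Cannot find device" messages above are expected: nvmf_veth_init tears down any stale interfaces before rebuilding the topology that the three pings then validate. The target lives in the nvmf_tgt_ns_spdk namespace with two veth legs (10.0.0.2 and 10.0.0.3), the initiator keeps 10.0.0.1 on the host side, and all bridge ends are enslaved to nvmf_br. Condensed from the ip/iptables calls in the log (same interface names and addresses; the preceding cleanup and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target leg 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target leg 2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward between legs
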
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79944 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79944 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:56.201 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79944 ']' 00:16:56.202 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.202 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.202 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.202 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.202 22:28:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:56.202 [2024-07-15 22:28:09.715008] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:56.202 [2024-07-15 22:28:09.715200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.461 [2024-07-15 22:28:09.858582] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.461 [2024-07-15 22:28:09.949352] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.461 [2024-07-15 22:28:09.949585] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.461 [2024-07-15 22:28:09.949618] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.461 [2024-07-15 22:28:09.949628] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.461 [2024-07-15 22:28:09.949635] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.461 [2024-07-15 22:28:09.949664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.030 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:57.030 [2024-07-15 22:28:10.646782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:57.289 null0 00:16:57.289 [2024-07-15 22:28:10.690235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.289 [2024-07-15 22:28:10.714294] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79972 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:57.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
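
common_target_config (host/digest.sh@43) pushes a small configuration through rpc_cmd, and the notices above are its result: the default sock implementation is overridden to uring, a null bdev named null0 is created, and it is exported through an NVMe/TCP subsystem listening on 10.0.0.2:4420. bdevperf is then launched with --wait-for-rpc so the harness can finish configuring it before any traffic starts. A rough equivalent using explicit rpc.py calls (a sketch only: the exact arguments live in host/digest.sh, the subsystem NQN nqn.2016-06.io.spdk:cnode1 comes from digest.sh@14, and the bdev size, serial number and allow-any-host flag here are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_null_create null0 1000 512                    # 1000 MiB backing bdev, 512 B blocks (illustrative size)
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
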
00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79972 /var/tmp/bperf.sock 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79972 ']' 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.289 22:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:57.289 [2024-07-15 22:28:10.774626] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:57.289 [2024-07-15 22:28:10.774830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79972 ] 00:16:57.289 [2024-07-15 22:28:10.916940] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.550 [2024-07-15 22:28:11.010540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.118 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.118 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:58.118 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:58.118 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:58.118 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:58.375 [2024-07-15 22:28:11.895549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:58.375 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.375 22:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.633 nvme0n1 00:16:58.633 22:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:58.633 22:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:58.892 Running I/O for 2 seconds... 
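
Because the controller was attached with --ddgst (and scan_dsa=false, so no DSA offload is requested), every 4096-byte random read in this 2-second run carries an NVMe/TCP data digest that bdevperf verifies through SPDK's accel framework. Once the run completes, the harness's get_accel_stats queries the accel statistics over the bdevperf RPC socket and asserts that the crc32c operations were executed by the software module; the jq filter just below extracts that pair. An equivalent standalone query (same socket and filter as the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Ask the bdevperf process (not the target) how its crc32c work was executed.
  $rpc -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # For this non-DSA variant the expected output is "software <count>" with count > 0.
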
00:17:00.793 00:17:00.793 Latency(us) 00:17:00.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:00.793 nvme0n1 : 2.01 19352.16 75.59 0.00 0.00 6609.85 6106.17 19897.68 00:17:00.793 =================================================================================================================== 00:17:00.793 Total : 19352.16 75.59 0.00 0.00 6609.85 6106.17 19897.68 00:17:00.793 0 00:17:00.793 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:00.793 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:00.793 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:00.793 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:00.793 | select(.opcode=="crc32c") 00:17:00.793 | "\(.module_name) \(.executed)"' 00:17:00.793 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:01.051 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:01.051 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:01.051 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:01.051 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79972 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79972 ']' 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79972 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79972 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:01.052 killing process with pid 79972 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79972' 00:17:01.052 Received shutdown signal, test time was about 2.000000 seconds 00:17:01.052 00:17:01.052 Latency(us) 00:17:01.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.052 =================================================================================================================== 00:17:01.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79972 00:17:01.052 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79972 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80032 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80032 /var/tmp/bperf.sock 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80032 ']' 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:01.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.309 22:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:01.309 [2024-07-15 22:28:14.820629] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:01.309 [2024-07-15 22:28:14.820859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:01.309 Zero copy mechanism will not be used. 
00:17:01.309 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80032 ] 00:17:01.566 [2024-07-15 22:28:14.951490] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.566 [2024-07-15 22:28:15.046934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.196 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.196 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:02.197 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:02.197 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:02.197 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:02.456 [2024-07-15 22:28:15.847126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:02.456 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.456 22:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.715 nvme0n1 00:17:02.715 22:28:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:02.715 22:28:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:02.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:02.715 Zero copy mechanism will not be used. 00:17:02.715 Running I/O for 2 seconds... 
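After each run the test checks which accel module computed the crc32c digests and how many times, using the accel_get_stats call piped through jq shown above. In script form (paths and the jq filter copied from the log; the final expectation follows from scan_dsa=false, so the software module is expected to have done the work):
# Ask the bdevperf app for its crc32c accel statistics, then apply the test's expectation.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 )) && [[ "$acc_module" == software ]]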
00:17:05.247 00:17:05.247 Latency(us) 00:17:05.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.247 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:05.247 nvme0n1 : 2.00 8663.58 1082.95 0.00 0.00 1844.09 1750.26 2987.28 00:17:05.247 =================================================================================================================== 00:17:05.247 Total : 8663.58 1082.95 0.00 0.00 1844.09 1750.26 2987.28 00:17:05.247 0 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:05.247 | select(.opcode=="crc32c") 00:17:05.247 | "\(.module_name) \(.executed)"' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80032 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80032 ']' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80032 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80032 00:17:05.247 killing process with pid 80032 00:17:05.247 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.247 00:17:05.247 Latency(us) 00:17:05.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.247 =================================================================================================================== 00:17:05.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80032' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80032 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80032 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80087 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80087 /var/tmp/bperf.sock 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80087 ']' 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.247 22:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:05.247 [2024-07-15 22:28:18.786868] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:05.247 [2024-07-15 22:28:18.787110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80087 ] 00:17:05.505 [2024-07-15 22:28:18.918854] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.505 [2024-07-15 22:28:19.012417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.073 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.073 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:06.073 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:06.073 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:06.073 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:06.338 [2024-07-15 22:28:19.893120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.338 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.338 22:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.597 nvme0n1 00:17:06.597 22:28:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:06.597 22:28:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:06.856 Running I/O for 2 seconds... 
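A quick consistency check on the bdevperf result tables above: the MiB/s column appears to be the IOPS column scaled by the I/O size. For the two completed runs so far (values copied from the tables, computed with bc):
# MiB/s = IOPS * I/O size in bytes / 2^20
echo "scale=2; 19352.16 * 4096 / 1048576" | bc     # 75.59   (randread, 4096 B)
echo "scale=2; 8663.58 * 131072 / 1048576" | bc    # 1082.94 (randread, 131072 B; the table rounds to 1082.95)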
00:17:08.761 00:17:08.761 Latency(us) 00:17:08.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.761 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.761 nvme0n1 : 2.00 20498.15 80.07 0.00 0.00 6239.58 6000.89 12054.41 00:17:08.761 =================================================================================================================== 00:17:08.761 Total : 20498.15 80.07 0.00 0.00 6239.58 6000.89 12054.41 00:17:08.761 0 00:17:08.761 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:08.761 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:08.761 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:08.761 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:08.761 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:08.761 | select(.opcode=="crc32c") 00:17:08.761 | "\(.module_name) \(.executed)"' 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80087 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80087 ']' 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80087 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80087 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:09.019 killing process with pid 80087 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80087' 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80087 00:17:09.019 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.019 00:17:09.019 Latency(us) 00:17:09.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.019 =================================================================================================================== 00:17:09.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.019 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80087 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80147 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80147 /var/tmp/bperf.sock 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80147 ']' 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.279 22:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:09.279 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.279 Zero copy mechanism will not be used. 00:17:09.279 [2024-07-15 22:28:22.828367] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:09.279 [2024-07-15 22:28:22.828426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80147 ] 00:17:09.538 [2024-07-15 22:28:22.971825] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.538 [2024-07-15 22:28:23.060166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.104 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.104 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:10.104 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:10.104 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:10.104 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:10.363 [2024-07-15 22:28:23.936418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:10.363 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.363 22:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.622 nvme0n1 00:17:10.881 22:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:10.881 22:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:10.881 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:10.881 Zero copy mechanism will not be used. 00:17:10.881 Running I/O for 2 seconds... 
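For reference, the four digest_clean workloads only vary run_bperf's (rw, bs, qd) arguments; everything else on the bdevperf command line stays fixed. A small illustrative loop (not how digest.sh itself is structured) that prints the four command lines seen in this log:
# Map (rw, bs, qd) onto bdevperf flags (-w workload, -o I/O size, -q queue depth);
# commands are printed rather than executed.
while read -r rw bs qd; do
    echo /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc
done <<'EOF'
randread 4096 128
randread 131072 16
randwrite 4096 128
randwrite 131072 16
EOF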
00:17:12.795 00:17:12.795 Latency(us) 00:17:12.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.795 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:12.795 nvme0n1 : 2.00 8337.52 1042.19 0.00 0.00 1915.35 1177.81 3066.24 00:17:12.795 =================================================================================================================== 00:17:12.795 Total : 8337.52 1042.19 0.00 0.00 1915.35 1177.81 3066.24 00:17:12.795 0 00:17:12.795 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:12.795 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:12.795 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:12.795 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:12.795 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:12.795 | select(.opcode=="crc32c") 00:17:12.795 | "\(.module_name) \(.executed)"' 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80147 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80147 ']' 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80147 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80147 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:13.084 killing process with pid 80147 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80147' 00:17:13.084 Received shutdown signal, test time was about 2.000000 seconds 00:17:13.084 00:17:13.084 Latency(us) 00:17:13.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.084 =================================================================================================================== 00:17:13.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80147 00:17:13.084 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80147 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79944 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 79944 ']' 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79944 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79944 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79944' 00:17:13.343 killing process with pid 79944 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79944 00:17:13.343 22:28:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79944 00:17:13.601 00:17:13.601 real 0m17.403s 00:17:13.601 user 0m32.349s 00:17:13.601 sys 0m5.156s 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.601 ************************************ 00:17:13.601 END TEST nvmf_digest_clean 00:17:13.601 ************************************ 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:13.601 ************************************ 00:17:13.601 START TEST nvmf_digest_error 00:17:13.601 ************************************ 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80230 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80230 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80230 ']' 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.601 22:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:13.601 [2024-07-15 22:28:27.199078] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:13.601 [2024-07-15 22:28:27.199143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.859 [2024-07-15 22:28:27.329768] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.859 [2024-07-15 22:28:27.425252] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.859 [2024-07-15 22:28:27.425303] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.859 [2024-07-15 22:28:27.425312] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.859 [2024-07-15 22:28:27.425320] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.859 [2024-07-15 22:28:27.425327] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.859 [2024-07-15 22:28:27.425365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 [2024-07-15 22:28:28.124977] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.794 22:28:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 [2024-07-15 22:28:28.178337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:14.794 null0 00:17:14.794 [2024-07-15 22:28:28.221080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.794 [2024-07-15 22:28:28.245119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80262 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80262 /var/tmp/bperf.sock 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80262 ']' 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.794 22:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 [2024-07-15 22:28:28.302954] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:14.794 [2024-07-15 22:28:28.303037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80262 ] 00:17:15.052 [2024-07-15 22:28:28.445429] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.052 [2024-07-15 22:28:28.537569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.052 [2024-07-15 22:28:28.579185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.620 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.620 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:15.620 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:15.620 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:15.878 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:16.136 nvme0n1 00:17:16.136 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:16.136 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.136 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:16.137 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.137 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:16.137 22:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:16.137 Running I/O for 2 seconds... 
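The error-path test above differs from the clean runs in that crc32c on the target is routed to the error-injection accel module and then corrupted while bdevperf runs, which is why the READs below complete with data digest errors. The host side additionally sets bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 before attaching nvme0. The target-side sequence, pulled together from the trace (rpc_cmd in the test wraps rpc.py, possibly inside the target's network namespace; the default RPC socket is assumed here):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# The nvmf target was started with --wait-for-rpc, so crc32c can be re-assigned
# to the error-injection module before the framework finishes initializing.
"$RPC" accel_assign_opc -o crc32c -m error   # "Operation crc32c will be assigned to module error"
# ... framework init and the usual transport/subsystem/listener setup happen here (not reproduced) ...

# Keep digests intact while the host attaches nvme0 with --ddgst ...
"$RPC" accel_error_inject_error -o crc32c -t disable
# ... then start corrupting crc32c results (-i 256 copied verbatim from the trace)
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
# From this point perform_tests sees "data digest error on tqpair" on the host and the
# READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as logged below.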
00:17:16.137 [2024-07-15 22:28:29.738309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.137 [2024-07-15 22:28:29.738368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.137 [2024-07-15 22:28:29.738381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.137 [2024-07-15 22:28:29.751447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.137 [2024-07-15 22:28:29.751496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.137 [2024-07-15 22:28:29.751508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.137 [2024-07-15 22:28:29.764628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.137 [2024-07-15 22:28:29.764670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.137 [2024-07-15 22:28:29.764682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.777654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.777693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.777705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.790733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.790774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.790787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.803775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.803816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.803828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.816856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.816899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.816910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.829968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.830010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.830022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.843053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.843096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.843107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.856077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.856116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.856127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.869120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.869158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.869168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.882162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.882200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.882211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.895215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.395 [2024-07-15 22:28:29.895253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.395 [2024-07-15 22:28:29.895264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.395 [2024-07-15 22:28:29.908279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.908322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.908333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.921326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.921371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.921383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.934728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.934763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.934775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.949545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.949620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.949642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.964259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.964303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.964315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.979025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.979069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.979081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:29.993234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:29.993277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:29.993289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:30.008046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:30.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:30.008135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.396 [2024-07-15 22:28:30.022284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.396 [2024-07-15 22:28:30.022336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.396 [2024-07-15 22:28:30.022350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.036536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.036607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.050878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.050926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.050938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.065084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.065134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.065147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.079686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.079731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.079745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.094145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.094197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.094212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.654 [2024-07-15 22:28:30.108445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.654 [2024-07-15 22:28:30.108486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17207 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:16.654 [2024-07-15 22:28:30.108498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.122965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.123009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.123022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.137452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.137508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.137525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.151737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.151782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.151795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.166261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.166311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.166323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.180452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.180494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.180505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.194709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.194768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.194780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.209714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.209760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:5414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.209772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.224024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.224067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.224079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.238498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.238544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.238556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.252809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.252852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.252864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.266817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.266861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.266873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.655 [2024-07-15 22:28:30.281589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.655 [2024-07-15 22:28:30.281642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.655 [2024-07-15 22:28:30.281655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.295904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.295944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.295956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.310567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.310620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.310632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.325067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.325107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.325119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.339372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.339422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.339435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.354013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.354060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.354072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.368476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.368525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.368538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.383181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.383227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.383239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.397677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.397723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.397736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.412231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 
00:17:16.914 [2024-07-15 22:28:30.412274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.412285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.426814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.426855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.426867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.441261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.441304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.441316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.455720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.455759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.455770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.469760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.469813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.914 [2024-07-15 22:28:30.469824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.914 [2024-07-15 22:28:30.483695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.914 [2024-07-15 22:28:30.483732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.915 [2024-07-15 22:28:30.483742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.915 [2024-07-15 22:28:30.497713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.915 [2024-07-15 22:28:30.497751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.915 [2024-07-15 22:28:30.497762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.915 [2024-07-15 22:28:30.511792] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.915 [2024-07-15 22:28:30.511844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.915 [2024-07-15 22:28:30.511855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.915 [2024-07-15 22:28:30.525877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.915 [2024-07-15 22:28:30.525919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.915 [2024-07-15 22:28:30.525930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.915 [2024-07-15 22:28:30.539780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:16.915 [2024-07-15 22:28:30.539820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.915 [2024-07-15 22:28:30.539833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.553696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.553739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.553751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.568014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.568064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.568078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.582264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.582316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.582329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.597049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.597115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.597128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.611590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.611660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.633052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.633114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.633127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.647767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.647827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.647840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.662313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.662368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.662381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.676644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.676714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.691215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.691268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.691281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.174 [2024-07-15 22:28:30.705555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.174 [2024-07-15 22:28:30.705636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.174 [2024-07-15 22:28:30.705650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.720357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.720428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.720441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.734955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.735039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.749496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.749575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.749590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.764474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.764553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.764567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.779120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.779172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.779186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.175 [2024-07-15 22:28:30.793714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.175 [2024-07-15 22:28:30.793765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.175 [2024-07-15 22:28:30.793778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.808040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.808104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 
22:28:30.808117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.823037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.823093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 22:28:30.823106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.837953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.838010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 22:28:30.838025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.852433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.852488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 22:28:30.852501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.866997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.867049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 22:28:30.867062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.881539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.881591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.434 [2024-07-15 22:28:30.881615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.434 [2024-07-15 22:28:30.896083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.434 [2024-07-15 22:28:30.896136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.896151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.910474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.910530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24607 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.910546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.924976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.925038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.925051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.939574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.939631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.939645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.954186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.954234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.954249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.969092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.969138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.969151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.983826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.983887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.983905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:30.998380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:30.998432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:30.998444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:31.012713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:31.012765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:15719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:31.012785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:31.027134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:31.027182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:31.027211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:31.041534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:31.041587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:31.041608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.435 [2024-07-15 22:28:31.055631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.435 [2024-07-15 22:28:31.055699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.435 [2024-07-15 22:28:31.055712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.069742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.069788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.069799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.083993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.084041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.084054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.098210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.098258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.098270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.112496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.112544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.112556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.126581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.126637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.126649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.140954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.694 [2024-07-15 22:28:31.140995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.694 [2024-07-15 22:28:31.141006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.694 [2024-07-15 22:28:31.155165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.155208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.155221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.169223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.169282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.169293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.183274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.183320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.183332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.197730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.197775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.197787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.212025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 
00:17:17.695 [2024-07-15 22:28:31.212067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.212080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.226179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.226223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.226235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.240457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.240521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.240533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.255016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.255060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.255072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.269590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.269641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.269653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.284041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.284085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.284096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.298697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.298743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.298754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.313258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.313303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.313313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.695 [2024-07-15 22:28:31.327642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.695 [2024-07-15 22:28:31.327687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.695 [2024-07-15 22:28:31.327699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.342113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.342170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.342183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.356814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.356871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.356884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.371411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.371459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.371471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.386114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.386162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.386174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.400568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.400625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.400638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.414996] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.415040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.415052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.429728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.429774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.429786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.444128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.444200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.444214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.457799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.457846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.457860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.471464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.471508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.471520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.485717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.485765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.485795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.500329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.500375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.500388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:17.954 [2024-07-15 22:28:31.514946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.515007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.515018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.529505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.529551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.529564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.543821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.543871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.543884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.564218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.564270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.564295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.954 [2024-07-15 22:28:31.578531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:17.954 [2024-07-15 22:28:31.578576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.954 [2024-07-15 22:28:31.578589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.592748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.592796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.592808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.606328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.606376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.606389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.620366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.620416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.620428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.634654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.634702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.634714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.648991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.649041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.649053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.663498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.663549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.663580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.678047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.678098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.678127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.692978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.693028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.693041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.214 [2024-07-15 22:28:31.707484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2217e10) 00:17:18.214 [2024-07-15 22:28:31.707536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.214 [2024-07-15 22:28:31.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:18.214
00:17:18.214 Latency(us)
00:17:18.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:18.214 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:18.214 nvme0n1 : 2.00 17686.42 69.09 0.00 0.00 7232.17 6448.32 28425.25
00:17:18.214 ===================================================================================================================
00:17:18.214 Total : 17686.42 69.09 0.00 0.00 7232.17 6448.32 28425.25
00:17:18.214 0
00:17:18.214 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:18.214 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:18.214 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:18.214 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:18.214 | .driver_specific
00:17:18.214 | .nvme_error
00:17:18.214 | .status_code
00:17:18.214 | .command_transient_transport_error'
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80262
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80262 ']'
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80262
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80262
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:18.473 killing process with pid 80262
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80262'
00:17:18.473 Received shutdown signal, test time was about 2.000000 seconds
00:17:18.473
00:17:18.473 Latency(us)
00:17:18.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:18.473 ===================================================================================================================
00:17:18.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80262
00:17:18.473 22:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80262
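For readers following the trace above: the pass/fail decision for this digest-error run comes from the bdev's NVMe error counters, read over the bperf RPC socket and filtered with jq, and the run passes only when the transient transport error count is non-zero (138 in this run). A minimal stand-alone sketch of that check, mirroring the get_transient_errcount trace above and assuming a bdevperf app listening on /var/tmp/bperf.sock with a bdev named nvme0n1 as in this log:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (paths and names taken from this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# bdev_get_iostat reports per-status-code NVMe error counters for the bdev; the test
# enables them by passing --nvme-error-stat to bdev_nvme_set_options before attaching
# the controller (visible in the setup of the next run below).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# Injected crc32c corruption must surface as transient transport errors for the run to pass.
(( errcount > 0 )) && echo "PASS: $errcount transient transport errors" || echo "FAIL"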
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80317
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80317 /var/tmp/bperf.sock
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:18.731 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80317 ']'
00:17:18.732 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:18.732 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:18.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:18.732 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:18.732 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:18.732 22:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:18.732 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:18.732 Zero copy mechanism will not be used.
00:17:18.732 [2024-07-15 22:28:32.236814] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:17:18.732 [2024-07-15 22:28:32.236886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80317 ]
00:17:18.989 [2024-07-15 22:28:32.382068] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:18.989 [2024-07-15 22:28:32.483020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:18.989 [2024-07-15 22:28:32.526990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:19.557 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:19.557 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:19.557 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:19.557 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b
nvme0 00:17:19.813 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.071 nvme0n1 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:20.071 22:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:20.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:20.330 Zero copy mechanism will not be used. 00:17:20.330 Running I/O for 2 seconds... 00:17:20.330 [2024-07-15 22:28:33.735549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.735623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.735639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.739527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.739570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.739583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.743518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.743558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.743571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.747525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.747574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.747587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.751534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.751574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.751586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.755529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.755568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.755580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.759512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.759552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.759564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.763583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.763629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.763641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.767582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.767628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.767640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.771627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.771668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.771686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.775647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.775686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.775698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.779624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.779664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.779677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.783617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.783657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.783669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.787608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.787646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.787658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.791551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.791592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.791615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.795536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.795578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.795589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.799536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.799576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.799588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.803654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.803693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.803704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.807635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.807666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.330 [2024-07-15 22:28:33.807677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.330 [2024-07-15 22:28:33.811685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.330 [2024-07-15 22:28:33.811719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.811731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.815728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.815767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.815779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.819763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.819802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.819813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.823876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.823913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.823924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.827868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.827906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.827918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.831890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.831937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.831949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.835933] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.835973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.835985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.839892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.839932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.839944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.843902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.843941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.843952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.847949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.847991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.848003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.851982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.852020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.852032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.856041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.856080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.860037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.860077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.860089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.864017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.864074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.864085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.868074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.868112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.868124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.872065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.872104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.872115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.876032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.876070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.876082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.880022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.880061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.880072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.884057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.884098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.884110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.888066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.888104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.888115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.892071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.892109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.892121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.896053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.896099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.896112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.900156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.900195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.900207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.904156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.904196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.904208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.908141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.908180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.908192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.912134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.912173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.912185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.916169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.916208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.916220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.920174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.920215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.920227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.924173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.331 [2024-07-15 22:28:33.924213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.331 [2024-07-15 22:28:33.924224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.331 [2024-07-15 22:28:33.928176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.928215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.928228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.932134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.932172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.932184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.936083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.936122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.936134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.940218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.940256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.940269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.944256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.944294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:20.332 [2024-07-15 22:28:33.944306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.948275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.948315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.948327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.952277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.952316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.952328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.956236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.956275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.956287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.332 [2024-07-15 22:28:33.960178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.332 [2024-07-15 22:28:33.960218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.332 [2024-07-15 22:28:33.960229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.964086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.964125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.964137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.968055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.968093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.968105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.972017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.972058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.972070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.975930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.975970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.975982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.979931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.979970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.979982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.983922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.983961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.983972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.987940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.987979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.987991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.991940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.991979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.991997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.995908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.995947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.995958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:33.999831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:33.999868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:33.999880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:34.003843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:34.003889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:34.003902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:34.007834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:34.007874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:34.007886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:34.011814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:34.011853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:34.011864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:34.015821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:34.015862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:34.015873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.593 [2024-07-15 22:28:34.019806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.593 [2024-07-15 22:28:34.019844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.593 [2024-07-15 22:28:34.019856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.023767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.023803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.023815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.027787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:20.594 [2024-07-15 22:28:34.027822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.027834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.031792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.031828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.031840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.035856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.035893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.035905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.039832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.039871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.039883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.043801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.043845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.043857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.047796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.047833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.047845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.051768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.051801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.051813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.055724] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.055762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.055773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.059690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.059727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.059739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.063692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.063728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.063739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.067652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.067689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.067700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.071631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.071668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.071679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.075591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.075638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.075650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.079554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.079612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.079625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.083502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.083542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.083553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.087501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.087541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.087553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.091453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.091492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.091504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.095513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.095552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.095564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.099475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.099513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.099525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.103417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.103457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.103469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.107424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.107473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.107485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.111479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.111518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.111530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.115431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.115481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.115492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.119467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.119506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.119517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.123437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.123476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.123487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.127383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.127421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.127448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.131382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.131421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.594 [2024-07-15 22:28:34.131433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.594 [2024-07-15 22:28:34.135368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.594 [2024-07-15 22:28:34.135407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.135419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.139310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.139348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.139360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.143338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.143375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.143387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.147332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.147372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.147384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.151360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.151399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.151411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.155325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.155364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.155375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.159280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.159319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.159330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.163333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.163372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.163383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.167329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.167369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.167381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.171329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.171367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.175349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.175387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.175398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.179342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.179379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.179390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.183360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.183398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.183409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.187363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.187404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.187415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.191382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.191420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.191432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.195369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.195409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.195421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.199276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.199314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.199326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.203278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.203316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.203328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.207267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.207312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.207324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.211264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.211303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.211314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.215235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.215284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.219177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.219215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.219242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.595 [2024-07-15 22:28:34.223184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.595 [2024-07-15 22:28:34.223224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.595 [2024-07-15 22:28:34.223235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.227203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.227245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.227257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.231171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.231212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.231224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.235139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.235180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.235192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.239118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.239157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.239169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.243080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.243120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.243132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.247042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:20.856 [2024-07-15 22:28:34.247081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.247093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.250964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.251003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.251014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.254938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.254979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.254991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.258898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.258937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.258949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.262780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.262820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.262833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.266676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.266714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.266725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.270569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.270619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.270632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.274448] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.274493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.274508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.278392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.278432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.278444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.282294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.282332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.282343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.286221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.286261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.286273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.290108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.290149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.290161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.294000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.294040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.294051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.297896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.297935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.297947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.301780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.301818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.301831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.305695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.305733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.305745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.856 [2024-07-15 22:28:34.309549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.856 [2024-07-15 22:28:34.309587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.856 [2024-07-15 22:28:34.309611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.313456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.313490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.313502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.317384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.317426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.317438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.321286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.321322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.321350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.325295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.325333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.329289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.329326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.329340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.333225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.333263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.333275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.337159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.337197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.337214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.341089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.341125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.341136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.345000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.345036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.345048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.348833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.348870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.348884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.352724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.352760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.352771] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.356689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.356735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.360642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.360682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.360693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.364593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.364641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.364652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.368508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.368546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.368558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.372344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.372389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.372406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.376201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.376240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.376251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.380038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.380076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.380087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.383923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.383961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.383972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.387798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.387847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.391658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.391695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.391707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.395615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.395653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.395664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.399532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.399571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.399583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.403517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.403556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.403567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.407407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.407446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.407473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.411303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.857 [2024-07-15 22:28:34.411342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.857 [2024-07-15 22:28:34.411353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.857 [2024-07-15 22:28:34.415212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.415253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.415265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.419098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.419140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.419151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.423057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.423097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.423108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.426935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.426975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.426986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.430804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.430842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.430853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.434659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.434699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.434710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.438531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.438571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.438582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.442437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.442478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.442489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.446453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.446494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.446506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.450429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.450469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.450480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.454416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.454467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.454479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.458435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.458476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.458488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.462455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:20.858 [2024-07-15 22:28:34.462495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.462507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.466388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.466430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.466441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.470341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.470382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.470394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.474295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.474336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.474348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.478257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.478310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.482204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.482246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.482258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.858 [2024-07-15 22:28:34.486105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:20.858 [2024-07-15 22:28:34.486145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.858 [2024-07-15 22:28:34.486158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.490027] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.490066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.490078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.493917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.493956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.493967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.497798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.497836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.497847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.501685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.501722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.501734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.505574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.505624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.505642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.509486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.509522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.509533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.513367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.513426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.513438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.517293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.517332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.517345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.521209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.521245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.521256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.525131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.525169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.525183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.529011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.529045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.529057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.532999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.533038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.533051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.536939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.536975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.536987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.540845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.540883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.540894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.544719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.544754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.544765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.548608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.548662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.548673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.552607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.119 [2024-07-15 22:28:34.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.119 [2024-07-15 22:28:34.552666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.119 [2024-07-15 22:28:34.556537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.556576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.556588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.560415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.560454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.560466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.564304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.564341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.564352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.568182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.568219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.568230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.572021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.572059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.572070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.575841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.575879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.575890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.579709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.579747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.579758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.583585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.583631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.583642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.587474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.587522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.591401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.591441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.591452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.595355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:21.120 [2024-07-15 22:28:34.595406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.599272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.599310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.599321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.603177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.603216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.603227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.607124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.607160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.607171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.611096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.611135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.611147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.615002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.615042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.615054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.618895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.618934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.618946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.622800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.622839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.622851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.626772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.626810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.626822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.630676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.630715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.630726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.634623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.634663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.634674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.638654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.638693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.638705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.642726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.642765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.642777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.646660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.646699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.646710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.650644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.650686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.650698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.654512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.654551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.654562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.658413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.658452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.658463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.662290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.662330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.662341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.666217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.120 [2024-07-15 22:28:34.666257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.120 [2024-07-15 22:28:34.666268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.120 [2024-07-15 22:28:34.670199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.670241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.670253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.674103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.674157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.674169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.678024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:21.121 [2024-07-15 22:28:34.678064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.678076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.682012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.682052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.682063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.686041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.686080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.686092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.690090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.690138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.690150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.694099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.694142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.694155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.698118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.698160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.698171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.702142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.702183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.702195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.706123] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.706164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.706177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.710090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.710129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.710141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.714043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.714084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.714097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.717993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.718035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.718047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.721992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.722034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.722046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.726011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.726065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.730025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.730064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.730076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.733935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.733975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.733987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.737944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.737984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.737995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.741825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.741875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.741887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.745767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.745804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.745816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.121 [2024-07-15 22:28:34.749717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.121 [2024-07-15 22:28:34.749755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.121 [2024-07-15 22:28:34.749766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.753648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.753686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.757559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.757610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.757622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.761525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.761562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.761574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.765513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.765550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.765562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.769430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.769467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.769479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.773343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.773388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.773416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.777328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.381 [2024-07-15 22:28:34.777375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.381 [2024-07-15 22:28:34.777387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.381 [2024-07-15 22:28:34.781250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.781287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.781298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.785258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.785296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.785310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.789261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.789313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.793260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.793300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.797164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.797202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.797215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.801100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.801138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.801152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.805129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.805179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.809147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.809183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.809200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.813130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.813166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:21.382 [2024-07-15 22:28:34.813178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.817055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.817091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.820969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.821007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.821018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.824863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.824899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.824911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.828804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.828878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.828892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.832654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.832689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.832716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.836529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.836569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.836580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.840431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.840469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.840480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.844288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.844327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.844339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.848223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.848261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.848271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.852176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.852216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.852227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.856089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.856132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.856143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.860001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.860043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.860054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.863839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.863877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.863888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.867713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.867751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.867762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.871579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.871627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.871639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.875505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.875549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.879469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.879510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.879522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.883321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.883361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.883373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.887211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.887251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.887263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.382 [2024-07-15 22:28:34.891153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.382 [2024-07-15 22:28:34.891192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.382 [2024-07-15 22:28:34.891204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.895044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:21.383 [2024-07-15 22:28:34.895082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.895093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.898894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.898933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.898944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.902881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.902921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.902933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.906877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.906929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.906940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.910836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.910875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.910886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.914736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.914773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.914785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.918728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.918777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.922795] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.922832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.922843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.926846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.926886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.926898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.930820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.930859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.930870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.934687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.934725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.934736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.938697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.938736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.938748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.942617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.942655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.942666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.946550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.946590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.946614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.950534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.950574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.950585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.954470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.954510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.954522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.958485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.958524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.958535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.962394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.962436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.962447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.966382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.966422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.966433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.970454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.970505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.970517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.974481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.974532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.978489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.978536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.978547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.982623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.982664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.982676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.986630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.986669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.986681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.990651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.990688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.990700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.994654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.994688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.994700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:34.998629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:34.998666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:34.998677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:35.002687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:35.002726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:35.002738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.383 [2024-07-15 22:28:35.006639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.383 [2024-07-15 22:28:35.006678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.383 [2024-07-15 22:28:35.006690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.384 [2024-07-15 22:28:35.010675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.384 [2024-07-15 22:28:35.010712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.384 [2024-07-15 22:28:35.010723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.014728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.014764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.014776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.018676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.018714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.018726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.022707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.022743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.022755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.026691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.026731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.026742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.030690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.030727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:21.646 [2024-07-15 22:28:35.030739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.034745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.034781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.034792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.038750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.038789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.042730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.042779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.046591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.046642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.046654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.050579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.050630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.050641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.054656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.054692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.054703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.058682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.058722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.058734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.062668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.062706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.062718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.066637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.066675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.066686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.070641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.070694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.070706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.074661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.074709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.078640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.078678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.078690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.082689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.082727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.082739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.086736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.086776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.086789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.090768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.090808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.646 [2024-07-15 22:28:35.090820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.646 [2024-07-15 22:28:35.094744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.646 [2024-07-15 22:28:35.094783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.094798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.098733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.098778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.098790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.102785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.102824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.102836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.106748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.106786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.106798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.110728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.110783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.110795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.114565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:21.647 [2024-07-15 22:28:35.114610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.114621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.118504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.118544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.118556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.122607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.122645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.122656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.126561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.126621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.130542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.130582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.130594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.134442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.134483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.134494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.138394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.138440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.138452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.142294] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.142334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.142345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.146303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.146349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.146367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.150316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.150355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.150367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.154303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.154344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.154356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.158297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.158338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.158350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.162291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.162329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.162341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.166330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.166370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.166382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.170380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.170420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.170432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.174351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.174397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.178353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.178404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.182346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.182387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.182398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.186275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.186314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.186325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.190339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.190380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.190392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.194298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.194336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.194348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.198280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.198326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.198340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.202268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.202314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.202330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.206276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.647 [2024-07-15 22:28:35.206329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.647 [2024-07-15 22:28:35.210295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.647 [2024-07-15 22:28:35.210335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.210351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.214277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.214317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.214329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.218251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.218292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.218304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.222253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.222298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.222314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.226204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.226244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.226256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.230213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.230253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.230265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.234234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.234275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.234287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.238242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.238282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.238294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.242249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.242290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.242301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.246222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.246262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.246274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.250165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.250206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:21.648 [2024-07-15 22:28:35.250218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.254096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.254141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.254153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.258064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.258107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.258125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.262073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.262114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.262126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.266031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.266072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.266083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.269987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.270028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.270040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.648 [2024-07-15 22:28:35.273975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.648 [2024-07-15 22:28:35.274016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.648 [2024-07-15 22:28:35.274028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.919 [2024-07-15 22:28:35.277961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.919 [2024-07-15 22:28:35.278002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.919 [2024-07-15 22:28:35.278020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.919 [2024-07-15 22:28:35.281960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.919 [2024-07-15 22:28:35.282001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.919 [2024-07-15 22:28:35.282014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.919 [2024-07-15 22:28:35.285917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.919 [2024-07-15 22:28:35.285957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.919 [2024-07-15 22:28:35.285969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.919 [2024-07-15 22:28:35.289876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.919 [2024-07-15 22:28:35.289916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.289928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.293935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.293977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.293989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.297849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.297891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.297903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.302499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.302647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.302711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.311243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.311345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.311384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.318366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.318435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.318460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.324273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.324338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.324361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.328589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.328656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.328672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.333055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.333102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.333119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.337074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.337111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.337123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.341040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.341077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.341089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.345120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:21.920 [2024-07-15 22:28:35.345157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.345168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.349084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.349121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.349132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.353094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.353130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.353142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.357040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.357076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.357087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.360985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.361021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.361032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.364941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.364977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.364989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.368889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.368925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.368937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.372885] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.372922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.372933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.376861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.376898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.376910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.380860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.380896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.380908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.384771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.384806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.384818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.388719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.388753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.388765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.392709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.392744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.392755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.396717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.396751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.396762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.400674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.400713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.400725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.404623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.404662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.404674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.408529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.920 [2024-07-15 22:28:35.408569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.920 [2024-07-15 22:28:35.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.920 [2024-07-15 22:28:35.412518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.412557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.412569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.416422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.416467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.416479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.420342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.420379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.420391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.424252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.424290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.424301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.428127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.428166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.428176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.432134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.432174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.432185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.436003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.436040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.436050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.439854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.439892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.439903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.443798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.443836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.447708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.447745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.447756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.451588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.451638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.451649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.455554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.455593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.455619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.459561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.459611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.459624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.463487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.463525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.463537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.467403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.467442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.467454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.471351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.471390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.471401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.475347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.475387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.475398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.479275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.479314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:21.921 [2024-07-15 22:28:35.479326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.483296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.483335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.483363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.487273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.487311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.487323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.491230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.491270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.491281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.495222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.495261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.495273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.499068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.499106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.499117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.502989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.503044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.503056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.506839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.506877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.506889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.510643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.510678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.510689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.514404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.514441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.518287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.518327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.518338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.921 [2024-07-15 22:28:35.522194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.921 [2024-07-15 22:28:35.522232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.921 [2024-07-15 22:28:35.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.526152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.526189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.526200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.530097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.530135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.530146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.534021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.534059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.534070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.537916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.537955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.537967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.541806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.541853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.545679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.545716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.545727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.922 [2024-07-15 22:28:35.549523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:21.922 [2024-07-15 22:28:35.549560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.922 [2024-07-15 22:28:35.549571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.553336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.553379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.553390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.557174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.557207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.557218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.560977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 
00:17:22.181 [2024-07-15 22:28:35.561011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.561023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.564877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.564913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.564924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.568760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.568795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.568807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.572652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.572688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.572699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.576509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.576545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.576557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.580401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.580438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.580450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.584311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.584349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.584360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.588240] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.588278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.588290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.592129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.592171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.592183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.596107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.596147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.596159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.600052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.600092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.600104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.603961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.181 [2024-07-15 22:28:35.603999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.181 [2024-07-15 22:28:35.604010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.181 [2024-07-15 22:28:35.607884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.607923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.607935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.611766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.611802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.615623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.615658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.615669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.619441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.619485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.619496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.623304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.623342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.623352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.627143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.627180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.627191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.630958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.630995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.631006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.634835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.634872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.634884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.638715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.638751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.638762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.642570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.642629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.646465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.646504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.646515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.650378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.650417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.654335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.654374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.654385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.658235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.658274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.658285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.662092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.662130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.662142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.665986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.666026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.666037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.669865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.669904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.669915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.673735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.673771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.673783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.677607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.677643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.677655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.681456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.681490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.681501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.685281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.685313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.685325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.689139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.689172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.689183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.692958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.692992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:22.182 [2024-07-15 22:28:35.693002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.696789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.696823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.696835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.700574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.700622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.700634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.704398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.704435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.704447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.708289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.708326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.708338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.182 [2024-07-15 22:28:35.712177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.182 [2024-07-15 22:28:35.712217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.182 [2024-07-15 22:28:35.712229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.183 [2024-07-15 22:28:35.716108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.183 [2024-07-15 22:28:35.716147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.183 [2024-07-15 22:28:35.716158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:22.183 [2024-07-15 22:28:35.719972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30) 00:17:22.183 [2024-07-15 22:28:35.720011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:22.183 [2024-07-15 22:28:35.720022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:22.183 [2024-07-15 22:28:35.723806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9f30)
00:17:22.183 [2024-07-15 22:28:35.723859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:22.183 [2024-07-15 22:28:35.723870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:22.183
00:17:22.183 Latency(us)
00:17:22.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:22.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:22.183 nvme0n1 : 2.00 7757.53 969.69 0.00 0.00 2059.37 1802.90 8527.58
00:17:22.183 ===================================================================================================================
00:17:22.183 Total : 7757.53 969.69 0.00 0.00 2059.37 1802.90 8527.58
00:17:22.183 0
00:17:22.183 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:22.183 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:22.183 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:22.183 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:22.183 | .driver_specific
00:17:22.183 | .nvme_error
00:17:22.183 | .status_code
00:17:22.183 | .command_transient_transport_error'
00:17:22.441 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 501 > 0 ))
00:17:22.441 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80317
00:17:22.441 22:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80317 ']'
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80317
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80317
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:22.441 killing process with pid 80317
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80317'
00:17:22.441 Received shutdown signal, test time was about 2.000000 seconds
00:17:22.441
00:17:22.441 Latency(us)
00:17:22.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:22.441 ===================================================================================================================
00:17:22.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
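Editor's note: the xtrace above (host/digest.sh@27/@28/@71) is the check that the data digest errors were actually counted. A minimal bash sketch, reassembled from the commands shown in the log; the helper name mirrors the trace but this is not the verbatim test script.

# Read the transient transport error counter for a bdev from the bdevperf
# instance listening on /var/tmp/bperf.sock (paths taken from the log above).
get_transient_errcount() {
	local bdev=$1
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
		bdev_get_iostat -b "$bdev" |
		jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

# The test then requires that at least one such error was observed
# (the run above counted 501):
(( $(get_transient_errcount nvme0n1) > 0 ))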
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80317 00:17:22.441 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80317 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80377 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80377 /var/tmp/bperf.sock 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80377 ']' 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.700 22:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:22.700 [2024-07-15 22:28:36.260975] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
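Editor's note: the launch pattern recorded just above is bdevperf started in "wait for RPC" mode (-z) on a private socket, with the test blocking until that socket accepts RPCs. A rough sketch under that reading; the polling loop is an assumed stand-in for the autotest waitforlisten helper, whose implementation is not shown in this log.

# Start bdevperf on core mask 0x2, driven over /var/tmp/bperf.sock; -z makes it
# initialize and wait for RPC configuration instead of running a job immediately.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
	-m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Assumed stand-in for waitforlisten: poll the RPC socket until it responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
	rpc_get_methods >/dev/null 2>&1; do
	sleep 0.1
done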
00:17:22.700 [2024-07-15 22:28:36.261043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80377 ]
00:17:22.958 [2024-07-15 22:28:36.393268] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:22.958 [2024-07-15 22:28:36.489695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:22.958 [2024-07-15 22:28:36.531519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:23.524 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:23.524 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:23.524 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:23.524 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:23.781 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:23.781 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:23.781 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:23.782 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:23.782 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:23.782 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:24.040 nvme0n1
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:24.040 22:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:24.298 Running I/O for 2 seconds...
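Editor's note: condensed from the trace above, the per-run configuration is: enable NVMe error statistics and unlimited bdev retries in bdevperf, attach the subsystem over TCP with data digest enabled (--ddgst), arm crc32c corruption in the accel layer, and start I/O via bdevperf.py. The sketch below is not the verbatim host/digest.sh; rpc_cmd in the trace appears to go to the nvmf target application rather than to bperf.sock, and its RPC socket is not shown in this log, so $target_rpc_sock is an assumption.

# Wrappers for the two RPC endpoints involved ($target_rpc_sock is assumed).
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
target_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$target_rpc_sock" "$@"; }

# Count NVMe errors per status code and retry transient errors indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any stale crc32c error injection before attaching.
target_rpc accel_error_inject_error -o crc32c -t disable
# Attach with TCP data digest enabled so data PDUs carry a CRC32C digest.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
	-n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm crc32c corruption (the "-t corrupt -i 256" arguments are carried over from the trace).
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the 2-second randwrite run configured on the bdevperf command line.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests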
00:17:24.298 [2024-07-15 22:28:37.736840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fef90 00:17:24.298 [2024-07-15 22:28:37.739017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.739060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.750120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190feb58 00:17:24.298 [2024-07-15 22:28:37.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.752285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.763194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fe2e8 00:17:24.298 [2024-07-15 22:28:37.765242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.765276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.776168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fda78 00:17:24.298 [2024-07-15 22:28:37.778249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.778285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.789253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fd208 00:17:24.298 [2024-07-15 22:28:37.791331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.791364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.802572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc998 00:17:24.298 [2024-07-15 22:28:37.804648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.804683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.815967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc128 00:17:24.298 [2024-07-15 22:28:37.818013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.829115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb8b8 00:17:24.298 [2024-07-15 22:28:37.831144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.831179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.842202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb048 00:17:24.298 [2024-07-15 22:28:37.844150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.844179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.855093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fa7d8 00:17:24.298 [2024-07-15 22:28:37.857082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.857111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.868250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f9f68 00:17:24.298 [2024-07-15 22:28:37.870243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.870279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.881423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f96f8 00:17:24.298 [2024-07-15 22:28:37.883373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.883402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.894604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8e88 00:17:24.298 [2024-07-15 22:28:37.896559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.896594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.907908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8618 00:17:24.298 [2024-07-15 22:28:37.909848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.298 [2024-07-15 22:28:37.909882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:24.298 [2024-07-15 22:28:37.921264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7da8 00:17:24.298 [2024-07-15 22:28:37.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.299 [2024-07-15 22:28:37.923214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:37.934682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7538 00:17:24.557 [2024-07-15 22:28:37.936576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:37.936613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:37.947988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6cc8 00:17:24.557 [2024-07-15 22:28:37.949909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:37.949942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:37.961074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6458 00:17:24.557 [2024-07-15 22:28:37.962885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:37.962914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:37.974055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5be8 00:17:24.557 [2024-07-15 22:28:37.975838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:37.975869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:37.987331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5378 00:17:24.557 [2024-07-15 22:28:37.989167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:37.989200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:38.000633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4b08 00:17:24.557 [2024-07-15 22:28:38.002427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:38.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:38.013381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4298 00:17:24.557 [2024-07-15 22:28:38.015120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.557 [2024-07-15 22:28:38.015150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:24.557 [2024-07-15 22:28:38.025915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f3a28 00:17:24.557 [2024-07-15 22:28:38.027650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.027708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.038916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f31b8 00:17:24.558 [2024-07-15 22:28:38.040689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.040720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.052047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f2948 00:17:24.558 [2024-07-15 22:28:38.053796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.053830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.065203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f20d8 00:17:24.558 [2024-07-15 22:28:38.066922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.066955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.078162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f1868 00:17:24.558 [2024-07-15 22:28:38.079814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.079845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.091082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f0ff8 00:17:24.558 [2024-07-15 22:28:38.092718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.092748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.104055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f0788 00:17:24.558 [2024-07-15 22:28:38.105755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.105790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.117136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eff18 00:17:24.558 [2024-07-15 22:28:38.118752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.118786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.129987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ef6a8 00:17:24.558 [2024-07-15 22:28:38.131567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.131606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.142952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eee38 00:17:24.558 [2024-07-15 22:28:38.144520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.144553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.156002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ee5c8 00:17:24.558 [2024-07-15 22:28:38.157622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.157656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.169439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190edd58 00:17:24.558 [2024-07-15 22:28:38.171039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.171072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:24.558 [2024-07-15 22:28:38.182824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ed4e8 00:17:24.558 [2024-07-15 22:28:38.184396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.558 [2024-07-15 22:28:38.184431] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.196100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ecc78 00:17:24.817 [2024-07-15 22:28:38.197649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.197694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.209117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ec408 00:17:24.817 [2024-07-15 22:28:38.210639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.210673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.222016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ebb98 00:17:24.817 [2024-07-15 22:28:38.223495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.223527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.235153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eb328 00:17:24.817 [2024-07-15 22:28:38.236689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.236721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.248528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eaab8 00:17:24.817 [2024-07-15 22:28:38.250060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.250095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.261660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ea248 00:17:24.817 [2024-07-15 22:28:38.263086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.263119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.274587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e99d8 00:17:24.817 [2024-07-15 22:28:38.276048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.817 [2024-07-15 22:28:38.276081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:24.817 [2024-07-15 22:28:38.287993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e9168 00:17:24.817 [2024-07-15 22:28:38.289445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.289479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.301431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e88f8 00:17:24.818 [2024-07-15 22:28:38.302865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.302899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.314752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e8088 00:17:24.818 [2024-07-15 22:28:38.316140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.316175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.328009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e7818 00:17:24.818 [2024-07-15 22:28:38.329394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.329453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.341294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e6fa8 00:17:24.818 [2024-07-15 22:28:38.342715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.342748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.354623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e6738 00:17:24.818 [2024-07-15 22:28:38.355987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.356020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.367733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e5ec8 00:17:24.818 [2024-07-15 22:28:38.369040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 
22:28:38.369073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.380813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e5658 00:17:24.818 [2024-07-15 22:28:38.382105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.382140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.393917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e4de8 00:17:24.818 [2024-07-15 22:28:38.395248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.395282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.407334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e4578 00:17:24.818 [2024-07-15 22:28:38.408638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.408671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.420790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e3d08 00:17:24.818 [2024-07-15 22:28:38.422079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.422115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.434086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e3498 00:17:24.818 [2024-07-15 22:28:38.435314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.435348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:24.818 [2024-07-15 22:28:38.447269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e2c28 00:17:24.818 [2024-07-15 22:28:38.448478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:24.818 [2024-07-15 22:28:38.448512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.460400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e23b8 00:17:25.076 [2024-07-15 22:28:38.461675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9955 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:25.076 [2024-07-15 22:28:38.461706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.473545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e1b48 00:17:25.076 [2024-07-15 22:28:38.474729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.474762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.486590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e12d8 00:17:25.076 [2024-07-15 22:28:38.487772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.487805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.499643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e0a68 00:17:25.076 [2024-07-15 22:28:38.500783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.500815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.512889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e01f8 00:17:25.076 [2024-07-15 22:28:38.514107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.514143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.526105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190df988 00:17:25.076 [2024-07-15 22:28:38.527212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.527245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.539018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190df118 00:17:25.076 [2024-07-15 22:28:38.540117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.540150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.551903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190de8a8 00:17:25.076 [2024-07-15 22:28:38.552991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:14976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.553025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.565005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190de038 00:17:25.076 [2024-07-15 22:28:38.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.566127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.583523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190de038 00:17:25.076 [2024-07-15 22:28:38.585679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.585715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.596580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190de8a8 00:17:25.076 [2024-07-15 22:28:38.598711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.598745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.609541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190df118 00:17:25.076 [2024-07-15 22:28:38.611638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.611677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.622521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190df988 00:17:25.076 [2024-07-15 22:28:38.624574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.624612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.635339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e01f8 00:17:25.076 [2024-07-15 22:28:38.637396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.637430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.648571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e0a68 00:17:25.076 [2024-07-15 22:28:38.650643] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.650679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.661758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e12d8 00:17:25.076 [2024-07-15 22:28:38.663802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.663834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.675034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e1b48 00:17:25.076 [2024-07-15 22:28:38.677055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.677088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.688364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e23b8 00:17:25.076 [2024-07-15 22:28:38.690385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.690421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:25.076 [2024-07-15 22:28:38.701747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e2c28 00:17:25.076 [2024-07-15 22:28:38.703722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.076 [2024-07-15 22:28:38.703755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.715116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e3498 00:17:25.334 [2024-07-15 22:28:38.717098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.717130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.728420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e3d08 00:17:25.334 [2024-07-15 22:28:38.730357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.730394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.741562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e4578 00:17:25.334 [2024-07-15 22:28:38.743436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.743470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.754552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e4de8 00:17:25.334 [2024-07-15 22:28:38.756405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.756438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.767774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e5658 00:17:25.334 [2024-07-15 22:28:38.769697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.769734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.780929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e5ec8 00:17:25.334 [2024-07-15 22:28:38.782770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.782804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.794290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e6738 00:17:25.334 [2024-07-15 22:28:38.796167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.796197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.807621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e6fa8 00:17:25.334 [2024-07-15 22:28:38.809495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.809528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.820964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e7818 00:17:25.334 [2024-07-15 22:28:38.822817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.822850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.834436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e8088 00:17:25.334 [2024-07-15 
22:28:38.836266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.836299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.847677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e88f8 00:17:25.334 [2024-07-15 22:28:38.849429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.849463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.860673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e9168 00:17:25.334 [2024-07-15 22:28:38.862486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.862520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.873890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190e99d8 00:17:25.334 [2024-07-15 22:28:38.875643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.875673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.886973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ea248 00:17:25.334 [2024-07-15 22:28:38.888759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.888789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.900073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eaab8 00:17:25.334 [2024-07-15 22:28:38.901840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.901873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.913371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eb328 00:17:25.334 [2024-07-15 22:28:38.915118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.334 [2024-07-15 22:28:38.915149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:25.334 [2024-07-15 22:28:38.926786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ebb98 
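The surrounding output repeats one pattern per in-flight WRITE: tcp.c reports a data digest mismatch on the qpair, nvme_qpair.c echoes the failed command, and its completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). For orientation, here is a condensed, non-authoritative sketch of the RPC sequence that puts the connection into this state, pieced together from the bperf_rpc/rpc_cmd trace further down in this log (paths and arguments as printed there; the exact injection arguments used for this first run are not visible in this excerpt):

    # initiator (bdevperf) side, over /var/tmp/bperf.sock: keep per-controller NVMe error stats and allow unlimited bdev-level retries
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the TCP controller with data digest enabled (--ddgst)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results via the accel error-injection RPC (issued through the suite's rpc_cmd helper; its socket is not shown in this excerpt)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32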
00:17:25.334 [2024-07-15 22:28:38.928496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.335 [2024-07-15 22:28:38.928529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:25.335 [2024-07-15 22:28:38.940247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ec408 00:17:25.335 [2024-07-15 22:28:38.941935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.335 [2024-07-15 22:28:38.941970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:25.335 [2024-07-15 22:28:38.953641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ecc78 00:17:25.335 [2024-07-15 22:28:38.955293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.335 [2024-07-15 22:28:38.955327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:25.335 [2024-07-15 22:28:38.967021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ed4e8 00:17:25.594 [2024-07-15 22:28:38.968679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:38.968710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:38.980405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190edd58 00:17:25.594 [2024-07-15 22:28:38.982055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:38.982090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:38.993741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190ee5c8 00:17:25.594 [2024-07-15 22:28:38.995356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:38.995391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.007024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eee38 00:17:25.594 [2024-07-15 22:28:39.008573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.008619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.019999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with 
pdu=0x2000190ef6a8 00:17:25.594 [2024-07-15 22:28:39.021541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.021577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.033153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190eff18 00:17:25.594 [2024-07-15 22:28:39.034751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.034786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.046299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f0788 00:17:25.594 [2024-07-15 22:28:39.047838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.047872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.059396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f0ff8 00:17:25.594 [2024-07-15 22:28:39.060942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.060973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.072708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f1868 00:17:25.594 [2024-07-15 22:28:39.074230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.074265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.085886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f20d8 00:17:25.594 [2024-07-15 22:28:39.087393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.087425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.098946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f2948 00:17:25.594 [2024-07-15 22:28:39.100416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.100448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.112048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a77d0) with pdu=0x2000190f31b8 00:17:25.594 [2024-07-15 22:28:39.113522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.113555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.125130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f3a28 00:17:25.594 [2024-07-15 22:28:39.126603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.126635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.138454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4298 00:17:25.594 [2024-07-15 22:28:39.139909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.139942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.151618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4b08 00:17:25.594 [2024-07-15 22:28:39.153042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.153075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.164799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5378 00:17:25.594 [2024-07-15 22:28:39.166225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.166258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.178013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5be8 00:17:25.594 [2024-07-15 22:28:39.179403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.179435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.191260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6458 00:17:25.594 [2024-07-15 22:28:39.192634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.204653] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6cc8 00:17:25.594 [2024-07-15 22:28:39.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.206064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:25.594 [2024-07-15 22:28:39.218064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7538 00:17:25.594 [2024-07-15 22:28:39.219399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.594 [2024-07-15 22:28:39.219433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.231335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7da8 00:17:25.853 [2024-07-15 22:28:39.232655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.232688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.244572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8618 00:17:25.853 [2024-07-15 22:28:39.245934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.245967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.257947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8e88 00:17:25.853 [2024-07-15 22:28:39.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.259254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.271204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f96f8 00:17:25.853 [2024-07-15 22:28:39.272470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.272504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.284259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f9f68 00:17:25.853 [2024-07-15 22:28:39.285549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.285584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.297065] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fa7d8 00:17:25.853 [2024-07-15 22:28:39.298205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.298237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.309330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb048 00:17:25.853 [2024-07-15 22:28:39.310453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.310483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.322263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb8b8 00:17:25.853 [2024-07-15 22:28:39.323388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.323421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.335270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc128 00:17:25.853 [2024-07-15 22:28:39.336452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.336485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.348468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc998 00:17:25.853 [2024-07-15 22:28:39.349698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.349733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.361541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fd208 00:17:25.853 [2024-07-15 22:28:39.362664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.362694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.374153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fda78 00:17:25.853 [2024-07-15 22:28:39.375219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.375251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
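Once the two-second run drains (see the Latency(us) summary a little further down), the harness checks that the injected digest failures were actually accounted as transient transport errors on nvme0n1. A minimal sketch of that check, assembled from the get_transient_errcount trace that follows the summary; the errcount variable name is illustrative, while the socket path, jq filter, and the 152 count appear verbatim below:

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # this run reports 152 such completions, so the check passes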
00:17:25.853 [2024-07-15 22:28:39.386408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fe2e8 00:17:25.853 [2024-07-15 22:28:39.387434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.387466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.398738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190feb58 00:17:25.853 [2024-07-15 22:28:39.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.399776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.416024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fef90 00:17:25.853 [2024-07-15 22:28:39.418007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.853 [2024-07-15 22:28:39.418041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.853 [2024-07-15 22:28:39.428238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190feb58 00:17:25.853 [2024-07-15 22:28:39.430187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.854 [2024-07-15 22:28:39.430220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:25.854 [2024-07-15 22:28:39.440556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fe2e8 00:17:25.854 [2024-07-15 22:28:39.442567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.854 [2024-07-15 22:28:39.442605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:25.854 [2024-07-15 22:28:39.452909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fda78 00:17:25.854 [2024-07-15 22:28:39.454938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.854 [2024-07-15 22:28:39.454970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:25.854 [2024-07-15 22:28:39.465556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fd208 00:17:25.854 [2024-07-15 22:28:39.467553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.854 [2024-07-15 22:28:39.467584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:25.854 [2024-07-15 22:28:39.478131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc998 00:17:25.854 [2024-07-15 22:28:39.480005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.854 [2024-07-15 22:28:39.480034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:26.112 [2024-07-15 22:28:39.490478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fc128 00:17:26.112 [2024-07-15 22:28:39.492342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.112 [2024-07-15 22:28:39.492370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.502693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb8b8 00:17:26.113 [2024-07-15 22:28:39.504577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.504614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.514943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fb048 00:17:26.113 [2024-07-15 22:28:39.516770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.516799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.527261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190fa7d8 00:17:26.113 [2024-07-15 22:28:39.529168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.529198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.539840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f9f68 00:17:26.113 [2024-07-15 22:28:39.541752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.541783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.552411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f96f8 00:17:26.113 [2024-07-15 22:28:39.554373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.554407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.565207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8e88 00:17:26.113 [2024-07-15 22:28:39.567073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.567105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.577862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f8618 00:17:26.113 [2024-07-15 22:28:39.579669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.579698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.590688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7da8 00:17:26.113 [2024-07-15 22:28:39.592430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.592460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.603180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f7538 00:17:26.113 [2024-07-15 22:28:39.605024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.605052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.615916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6cc8 00:17:26.113 [2024-07-15 22:28:39.617708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.617741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.628441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f6458 00:17:26.113 [2024-07-15 22:28:39.630154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.630185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.640774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5be8 00:17:26.113 [2024-07-15 22:28:39.642546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.642577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.653090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f5378 00:17:26.113 [2024-07-15 22:28:39.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.665338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4b08 00:17:26.113 [2024-07-15 22:28:39.667010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.667040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.677699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f4298 00:17:26.113 [2024-07-15 22:28:39.679333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.679365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.690169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f3a28 00:17:26.113 [2024-07-15 22:28:39.691915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.691946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.702508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f31b8 00:17:26.113 [2024-07-15 22:28:39.704112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.704143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:26.113 [2024-07-15 22:28:39.714749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a77d0) with pdu=0x2000190f2948 00:17:26.113 [2024-07-15 22:28:39.716342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.113 [2024-07-15 22:28:39.716373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:26.113 00:17:26.113 Latency(us) 00:17:26.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.113 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.113 nvme0n1 : 2.01 19359.06 75.62 0.00 0.00 6606.78 5790.33 25372.17 00:17:26.113 =================================================================================================================== 
00:17:26.113 Total : 19359.06 75.62 0.00 0.00 6606.78 5790.33 25372.17 00:17:26.113 0 00:17:26.113 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:26.113 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:26.113 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:26.113 | .driver_specific 00:17:26.113 | .nvme_error 00:17:26.113 | .status_code 00:17:26.113 | .command_transient_transport_error' 00:17:26.113 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80377 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80377 ']' 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80377 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80377 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:26.372 killing process with pid 80377 00:17:26.372 Received shutdown signal, test time was about 2.000000 seconds 00:17:26.372 00:17:26.372 Latency(us) 00:17:26.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.372 =================================================================================================================== 00:17:26.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80377' 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80377 00:17:26.372 22:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80377 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80426 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80426 /var/tmp/bperf.sock 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80426 ']' 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.630 22:28:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:26.630 [2024-07-15 22:28:40.227357] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:26.630 [2024-07-15 22:28:40.227422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80426 ] 00:17:26.630 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.630 Zero copy mechanism will not be used. 00:17:26.888 [2024-07-15 22:28:40.355189] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.888 [2024-07-15 22:28:40.457715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.888 [2024-07-15 22:28:40.498698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:27.455 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.455 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:27.455 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:27.455 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.713 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.972 nvme0n1 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:27.972 22:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:28.231 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:28.231 Zero copy mechanism will not be used. 00:17:28.231 Running I/O for 2 seconds... 00:17:28.231 [2024-07-15 22:28:41.672490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.231 [2024-07-15 22:28:41.673100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.231 [2024-07-15 22:28:41.673291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.676639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.676863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.677025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.680608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.680669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.680691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.684380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.684448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.684470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.688161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.688226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.688246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.691992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.692054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.692074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.695828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.695904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.695924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.699609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.699749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.699768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.703048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.703318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.703344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.706689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.706745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.706765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.710489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.710550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.710570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.714226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.714286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.714306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.718021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.718092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.718111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.721778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.721856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.721876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.725540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.725620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.725640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.729325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.729417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.729437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.733114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.733177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.733197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.736443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.736808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.736967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.740394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.740479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.740500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.744126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 
22:28:41.744187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.744206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.747923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.747984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.748004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.751715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.751782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.751801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.755496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.755556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.755576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.759394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.759478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.759498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.763110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.763258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.766512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.766822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.770298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with 
pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.770374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.770394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.775372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.775459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.775478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.780419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.232 [2024-07-15 22:28:41.780496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.232 [2024-07-15 22:28:41.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.232 [2024-07-15 22:28:41.785429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.785507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.785527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.790587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.790678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.790698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.795931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.796022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.796042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.801296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.801391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.801410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.806660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.806739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.806760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.812009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.812091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.817255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.817355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.822592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.822693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.822714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.827950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.828037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.833299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.833418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.833439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.838764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.838847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.838868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.844126] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.844208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.844228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.849453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.849535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.849557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.854584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.854682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.854701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.233 [2024-07-15 22:28:41.859982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.233 [2024-07-15 22:28:41.860061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.233 [2024-07-15 22:28:41.860081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.865311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.865409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.865429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.870729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.870828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.876203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.876288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.876307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.881656] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.881736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.881757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.887075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.887160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.887180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.892466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.892547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.892567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.897951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.898031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.898052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.903375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.903456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.903477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.908754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.908838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.908858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.914129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.914215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.914235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.492 
[2024-07-15 22:28:41.919463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.919547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.919567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.492 [2024-07-15 22:28:41.924920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.492 [2024-07-15 22:28:41.925001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.492 [2024-07-15 22:28:41.925020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.930342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.930434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.930454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.935730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.935833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.941004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.941088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.941108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.946435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.946523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.946543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.951814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.951900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.951921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.957030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.957104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.957124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.967891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.968022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.968044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.976776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.976899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.976921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.982970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.983054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.983076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.988351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.988419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.988440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.992638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.992697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.992717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:41.996829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:41.996893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:41.996913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.000851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.000926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.000945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.004566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.004638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.008354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.008416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.008436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.012216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.012308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.012328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.016031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.016095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.016115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.019430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.019771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.019799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.023140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.023222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.023242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.026896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.026955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.026974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.030659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.030728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.493 [2024-07-15 22:28:42.030750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.493 [2024-07-15 22:28:42.034359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.493 [2024-07-15 22:28:42.034435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.034455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.038120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.038202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.038222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.041899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.041962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.041982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.045656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.045798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.045817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.049064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.049330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.049356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.052706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.052768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.052789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.056509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.056570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.056590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.060331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.060388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.060408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.064128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.064188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.064208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.067908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.067972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.067992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.071696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.071761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.071781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.075454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.075528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 
22:28:42.075547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.079338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.079404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.079424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.083220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.083285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.083306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.086662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.087011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.087035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.090270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.090365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.090386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.094079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.094139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.094159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.097919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.097983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.098003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.101749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.101815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.101834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.105562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.105659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.105680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.109394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.109489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.109509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.113186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.113350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.113380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.116689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.116983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.117009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.120316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.120376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.120396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.494 [2024-07-15 22:28:42.124173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.494 [2024-07-15 22:28:42.124238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.494 [2024-07-15 22:28:42.124258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.128035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.128094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.128114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.131866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.131935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.131954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.135635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.135711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.135730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.139464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.139522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.139541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.143271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.143354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.143373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.147069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.147132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.147152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.150421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.150772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.150799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.753 [2024-07-15 22:28:42.154082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.753 [2024-07-15 22:28:42.154162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.753 [2024-07-15 22:28:42.154182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.157953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.158017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.158039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.161787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.161846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.161865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.165590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.165671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.165691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.169381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.169493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.173165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.173323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.173343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.177027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.177136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.177155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.180472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 
22:28:42.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.180787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.184121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.184181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.184200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.187938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.187997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.188016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.191703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.191764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.191783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.195523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.195592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.195624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.199358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.199421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.199440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.203159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.203218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.203238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.206940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with 
pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.207014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.207033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.210825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.210980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.210999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.214384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.214679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.214708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.218100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.218165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.218185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.221952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.222018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.222038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.225818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.225877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.225897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.229681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.229740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.229760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.233468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.233529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.237236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.237337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.237356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.240995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.241064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.241084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.244817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.244887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.244907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.248212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.248567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.248593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.251973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.252054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.252073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.255780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.255843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.255863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.259678] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.259741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.259761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.263487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.263550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.263570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.267531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.267624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.267645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.271555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.271638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.275410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.275476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.275497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.279512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.279642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.279664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.284241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.284406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.284454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 
[2024-07-15 22:28:42.287978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.288292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.288323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.291917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.291983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.292004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.295981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.296040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.296064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.299955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.300109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.300145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.304214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.304384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.304444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.308769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.308832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.308858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.312723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.312891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.312923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.316255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.316559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.316611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.320225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.320347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.320371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.324143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.324199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.324221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.328136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.328276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.328316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.332276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.332369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.332392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.337440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.337527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.337550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.342102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.342247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.342277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.346209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.346306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.346333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.350262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.350413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.350447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.354276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.354490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.357923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.358206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.358239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.361658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.361734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.361757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.365618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.365689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.365712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.369383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.369453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.369474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.373205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.373343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.373374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.376986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.377078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.377099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.380920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.381026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.754 [2024-07-15 22:28:42.384875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:28.754 [2024-07-15 22:28:42.385053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.754 [2024-07-15 22:28:42.385073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.388345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.388622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.388664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.392157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.392219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.392240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.396102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.396181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.400016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.400083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.400106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.403914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.404062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.404105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.408405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.408520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.408544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.412407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.412520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.412552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.416284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.416481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.416515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.420052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.420121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.420143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.424042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.424119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 
22:28:42.424152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.428136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.428209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.428233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.431969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.432152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.432185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.435635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.435962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.436014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.439384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.439448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.439469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.443274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.443335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.443356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.447172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.447232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.447253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.451022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.451107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.451128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.013 [2024-07-15 22:28:42.454888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.013 [2024-07-15 22:28:42.454948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.013 [2024-07-15 22:28:42.454970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.458752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.458881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.458903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.462707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.462822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.462843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.466551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.466631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.466655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.469980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.470367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.470406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.473754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.473854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.473878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.477545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.477648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.477671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.481465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.481551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.485406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.485485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.485506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.489198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.489283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.489302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.493096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.493179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.493202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.496976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.497135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.497156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.500414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.500675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.500704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.504058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.504118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.504138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.507867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.507932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.507953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.511750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.511832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.515457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.515541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.515561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.519269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.519333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.519353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.523057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.523127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.526852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.526955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.526975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.530617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.530686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.534016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.534393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.534427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.537671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.537750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.537770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.014 [2024-07-15 22:28:42.541451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.014 [2024-07-15 22:28:42.541517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.014 [2024-07-15 22:28:42.541536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.545286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.545350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.545380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.549077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.549137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.549157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.552866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.552951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.552972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.556726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 
22:28:42.556813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.560521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.560677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.560702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.563957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.564225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.564254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.567573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.567648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.567668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.571401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.571467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.571487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.575270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.575336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.579086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.579157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.579178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.582901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 
00:17:29.015 [2024-07-15 22:28:42.582990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.583010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.586724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.586832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.586851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.590578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.590675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.590695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.594424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.594491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.594521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.597884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.598245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.598275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.601657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.601731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.601752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.605341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.605413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.605449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.609124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.609192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.609212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.612938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.613013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.613033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.616711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.616786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.616807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.620533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.620651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.620672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.624326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.015 [2024-07-15 22:28:42.624465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.015 [2024-07-15 22:28:42.624491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.015 [2024-07-15 22:28:42.627760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.016 [2024-07-15 22:28:42.628023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.016 [2024-07-15 22:28:42.628051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.016 [2024-07-15 22:28:42.631324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.016 [2024-07-15 22:28:42.631387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.016 [2024-07-15 22:28:42.631407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.016 [2024-07-15 22:28:42.635113] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.016 [2024-07-15 22:28:42.635171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.016 [2024-07-15 22:28:42.635191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.016 [2024-07-15 22:28:42.638862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.016 [2024-07-15 22:28:42.638921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.016 [2024-07-15 22:28:42.638941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.016 [2024-07-15 22:28:42.642719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.016 [2024-07-15 22:28:42.642785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.016 [2024-07-15 22:28:42.642805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.274 [2024-07-15 22:28:42.646556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.274 [2024-07-15 22:28:42.646651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-07-15 22:28:42.646672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.274 [2024-07-15 22:28:42.650404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.274 [2024-07-15 22:28:42.650539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-07-15 22:28:42.650561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.274 [2024-07-15 22:28:42.654289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.274 [2024-07-15 22:28:42.654373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-07-15 22:28:42.654394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.274 [2024-07-15 22:28:42.658174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.274 [2024-07-15 22:28:42.658241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-07-15 22:28:42.658262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.274 [2024-07-15 22:28:42.661677] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.274 [2024-07-15 22:28:42.662029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.662059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.665445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.665528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.665548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.669197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.669265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.669285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.672977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.673052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.673073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.676794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.676883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.676904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.680638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.680718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.684477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.684556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.684576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 
[2024-07-15 22:28:42.688302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.688440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.688462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.691752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.692016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.692045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.695441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.695502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.695522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.699262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.699341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.699361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.703073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.703161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.703181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.706833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.706892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.706912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.710612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.710697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.710716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.714361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.714436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.714457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.718153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.718232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.718253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.722029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.722094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.722115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.725408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.725778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.725807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.729120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.729199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.729219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.732906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.732971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.732990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.736758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.736836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.736857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.740535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.740616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.740636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.744415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.744475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.744496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.748241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.748319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.748339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.752078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.752221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.752247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.755547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.755815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.755843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.759217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.759279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.759299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.763044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.763112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.763132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.766824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.766885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.766904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.770586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.770672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.774334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.774425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.774445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.778137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.778207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.778228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.781971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.782075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.782098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.785785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.785855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.785876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.789170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.789524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.789569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.792825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.792900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.792920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.796576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.796648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.796668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.800320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.800385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.804084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.275 [2024-07-15 22:28:42.804142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-07-15 22:28:42.804162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-07-15 22:28:42.807960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.808084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.808105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.811724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.811874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.811905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.815508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.815568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 
22:28:42.815588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.818948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.819377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.823009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.823383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.823411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.826752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.826816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.826838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.830548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.830623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.830643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.834374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.834444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.834466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.838220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.838279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.838301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.841981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.842048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:29.276 [2024-07-15 22:28:42.842069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.845775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.845855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.845875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.849621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.849799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.849819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.853462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.853590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.856870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.857134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.857163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.860519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.860580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.860611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.864313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.864372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.864391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.868132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.868185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.868205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.871938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.872024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.872043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.875781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.875854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.875874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.879656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.879728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.879748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.883530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.883627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.883648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.887426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.887581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.887622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.890969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.891262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.891291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.894516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.894579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.894613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.898295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.898372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.898394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.901972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.902053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.902072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.276 [2024-07-15 22:28:42.905757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.276 [2024-07-15 22:28:42.905828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-07-15 22:28:42.905849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.909492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.535 [2024-07-15 22:28:42.909619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.535 [2024-07-15 22:28:42.909639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.913272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.535 [2024-07-15 22:28:42.913337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.535 [2024-07-15 22:28:42.913357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.917110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.535 [2024-07-15 22:28:42.917184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.535 [2024-07-15 22:28:42.917205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.920873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.535 [2024-07-15 22:28:42.920939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.535 [2024-07-15 22:28:42.920958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.924330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.535 [2024-07-15 22:28:42.924699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.535 [2024-07-15 22:28:42.924728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.535 [2024-07-15 22:28:42.927966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.928046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.928066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.931715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.931765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.931786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.935517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.935577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.935611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.939248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.939323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.939342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.943085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.943169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.943189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.946925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 
22:28:42.947010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.947029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.950648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.950798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.950817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.954063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.954315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.954344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.957727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.957789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.957808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.961646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.961707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.961729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.965466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.965537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.965559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.969374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.969466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.969488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.973266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 
00:17:29.536 [2024-07-15 22:28:42.973377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.973399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.977182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.977261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.977284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.981235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.981335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.981357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.985330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.985410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.985433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.989435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.989859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.989895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.993638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.993704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.993726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:42.998180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:42.998251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:42.998273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.002150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) 
with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.002214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.002234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.006085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.006151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.006171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.010071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.010144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.010166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.014020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.014100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.014121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.018025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.018116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.018138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.021992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.022144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.022181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.025606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.025912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.025942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.029395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.029468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.029490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.033432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.033491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.033513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.037447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.037503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.536 [2024-07-15 22:28:43.037525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.536 [2024-07-15 22:28:43.041273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.536 [2024-07-15 22:28:43.041373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.041393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.045053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.045152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.045172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.049210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.049387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.049419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.053103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.053183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.053204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.056981] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.057065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.057097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.060531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.060905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.064242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.064334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.064355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.068090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.068164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.068185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.072053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.072113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.072134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.076064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.076219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.076251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.080147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.080245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.080275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
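This stretch of output repeats a single pattern: data_crc32_calc_done() in tcp.c reports a Data digest error on the qpair, and the corresponding WRITE command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As an illustration of what such a digest check amounts to, below is a minimal, self-contained C sketch of a CRC32C data-digest verification. It assumes the standard CRC32C (Castagnoli) parameters (reflected, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF); the names crc32c(), verify_data_digest(), and the sample payload are hypothetical illustrations and are not SPDK's actual tcp.c code.

/*
 * Illustrative sketch only: a receive-path data digest check of the kind
 * whose failures are reported above. Not SPDK code; names are hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli): reflected, reversed polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = (const uint8_t *)buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;  /* final XOR */
}

/* Compare the digest computed over the received payload with the digest
 * carried in the PDU; a mismatch is what the "Data digest error" lines
 * report, and the command is then failed back to the caller (here with
 * the transient transport error status seen in the log). */
static bool verify_data_digest(const void *payload, size_t len, uint32_t expected_digest)
{
    return crc32c(payload, len) == expected_digest;
}

int main(void)
{
    uint8_t payload[512];
    for (size_t i = 0; i < sizeof(payload); i++) {
        payload[i] = (uint8_t)i;  /* arbitrary sample data */
    }

    uint32_t good = crc32c(payload, sizeof(payload));
    printf("digest match:    %s\n", verify_data_digest(payload, sizeof(payload), good) ? "yes" : "no");
    printf("digest mismatch: %s\n", verify_data_digest(payload, sizeof(payload), good ^ 1u) ? "yes" : "no");
    return 0;
}

A production receive path would typically use a table-driven or hardware-accelerated CRC32C rather than this bitwise loop; the comparison against the digest carried with the data is the step that corresponds to the errors logged here.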
00:17:29.537 [2024-07-15 22:28:43.084140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.084229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.084258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.088150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.088316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.088347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.091916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.092192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.092222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.095880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.095954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.095976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.100034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.100111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.100142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.104001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.104090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.104123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.108009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.108176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.108197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.111976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.112054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.112075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.115749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.115906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.115936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.119552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.119673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.119701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.123639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.123766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.123787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.127154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.127409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.127439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.130944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.131019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.131040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.134926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.134995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.135016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.138843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.138921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.138943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.142849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.142914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.142935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.146845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.146965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.146996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.150931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.151064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.151108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.537 [2024-07-15 22:28:43.154892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.537 [2024-07-15 22:28:43.154993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.537 [2024-07-15 22:28:43.155015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.538 [2024-07-15 22:28:43.158935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.538 [2024-07-15 22:28:43.159004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.538 [2024-07-15 22:28:43.159041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.538 [2024-07-15 22:28:43.162497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.538 [2024-07-15 22:28:43.162854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.538 [2024-07-15 22:28:43.162900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.538 [2024-07-15 22:28:43.166338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.538 [2024-07-15 22:28:43.166419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.538 [2024-07-15 22:28:43.166439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.170179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.170242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.170263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.174136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.174201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.174221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.177883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.178036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.178068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.181632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.181702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.181723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.185525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.185622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.185644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.189400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.189559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 
[2024-07-15 22:28:43.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.193036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.193333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.193372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.196907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.197074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.200879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.200944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.200982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.205093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.205176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.205197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.208971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.209055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.209075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.213129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.213243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.213265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.217198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.217335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.217356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.221267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.221382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.221421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.225259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.225321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.225356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.228795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.229137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.797 [2024-07-15 22:28:43.229165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.797 [2024-07-15 22:28:43.232672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.797 [2024-07-15 22:28:43.232747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.232768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.236647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.236705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.236727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.240548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.240619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.240640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.244597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.244726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.248508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.248600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.248635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.252448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.252528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.252548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.256342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.256483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.256503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.259918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.260236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.263707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.263770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.263790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.267666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.267731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.267751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.271569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.271659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.271680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.275370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.275428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.275448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.279355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.279412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.279448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.283197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.283282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.283302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.287485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.287547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.287568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.291472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.291581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.291615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.295353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.295419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.295440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.299532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.299973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.300022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.303867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.304279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.304315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.307966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.308031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.308068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.312485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.312571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.312592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.316521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.316580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.798 [2024-07-15 22:28:43.316617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.798 [2024-07-15 22:28:43.320437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.798 [2024-07-15 22:28:43.320521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.320541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.324419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.324488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.324509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.328379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 
[2024-07-15 22:28:43.328454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.328491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.332294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.332358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.332380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.336185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.336401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.336423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.339775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.340024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.340044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.343503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.343570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.343590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.347370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.347432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.347453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.351368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.351433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.351454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.355203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) 
with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.355276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.355297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.359139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.359257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.359279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.363081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.363198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.363218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.367041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.367128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.367149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.370919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.370994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.371016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.374310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.374697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.374728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.378050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.378139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.378160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.381857] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.381921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.381943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.385720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.385797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.385818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.389599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.389677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.389697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.393453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.393550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.397247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.397411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.397432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.401127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.401234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.401254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.799 [2024-07-15 22:28:43.404701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.799 [2024-07-15 22:28:43.404976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.799 [2024-07-15 22:28:43.404998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.408376] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.408433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.412284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.412338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.412358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.416129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.416203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.416222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.419939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.419998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.420017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.423721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.423835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.423854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.800 [2024-07-15 22:28:43.427739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:29.800 [2024-07-15 22:28:43.427884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.800 [2024-07-15 22:28:43.427908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.060 [2024-07-15 22:28:43.431412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.060 [2024-07-15 22:28:43.431475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.060 [2024-07-15 22:28:43.431496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 
[2024-07-15 22:28:43.434846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.435273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.435315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.438800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.438881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.438904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.442535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.442632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.442654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.446530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.446648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.446671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.450497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.450682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.450713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.454550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.454708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.454730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.458438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.458542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.458564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.462348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.462499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.462521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.465983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.466251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.466284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.469683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.469757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.469780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.473554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.473656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.477380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.477445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.477483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.481231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.481287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.481324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.485187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.485269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.485289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.489018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.489077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.489114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.492994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.493079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.493101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.496844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.497023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.497043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.500382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.500662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.500682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.504033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.504111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.504133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.507874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.507936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.507956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.511748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.511832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.515499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.515592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.519260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.519382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.519402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.523227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.523335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.523356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.527133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.527202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.527222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.530620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.530957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.530991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.534353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.534443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 22:28:43.534479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.061 [2024-07-15 22:28:43.538319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.061 [2024-07-15 22:28:43.538379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.061 [2024-07-15 
22:28:43.538401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.542191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.542288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.542310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.545972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.546059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.546082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.549820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.549999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.550021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.553653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.553735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.553756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.557030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.557200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.557220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.560613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.560671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.560694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.564431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.564490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:30.062 [2024-07-15 22:28:43.564519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.568398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.568461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.568483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.572334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.572421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.572443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.576127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.576282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.576303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.580063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.580150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.580172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.584021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.584088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.584111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.587568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.587959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.588003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.591434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.591514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.591534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.595474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.595541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.595561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.599354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.599445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.599466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.603364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.603455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.603476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.607231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.607296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.607316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.611120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.611183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.611207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.615015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.615170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.615191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.618504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.618812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.618838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.622215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.622276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.622298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.626171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.626236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.626258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.630080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.630194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.630216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.634042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.634108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.637955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.638022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.638044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.641970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.642144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.062 [2024-07-15 22:28:43.642168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.062 [2024-07-15 22:28:43.646077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.062 [2024-07-15 22:28:43.646234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.063 [2024-07-15 22:28:43.646268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.063 [2024-07-15 22:28:43.649983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.063 [2024-07-15 22:28:43.650058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.063 [2024-07-15 22:28:43.650080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.063 [2024-07-15 22:28:43.653442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.063 [2024-07-15 22:28:43.653824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.063 [2024-07-15 22:28:43.653861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.063 [2024-07-15 22:28:43.657275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a7b10) with pdu=0x2000190fef90 00:17:30.063 [2024-07-15 22:28:43.657355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.063 [2024-07-15 22:28:43.657387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.063 00:17:30.063 Latency(us) 00:17:30.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.063 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:30.063 nvme0n1 : 2.00 7771.21 971.40 0.00 0.00 2055.00 1342.30 9685.64 00:17:30.063 =================================================================================================================== 00:17:30.063 Total : 7771.21 971.40 0.00 0.00 2055.00 1342.30 9685.64 00:17:30.063 0 00:17:30.063 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:30.063 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:30.063 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:30.063 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:30.063 | .driver_specific 00:17:30.063 | .nvme_error 00:17:30.063 | .status_code 00:17:30.063 | .command_transient_transport_error' 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 501 > 0 )) 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80426 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80426 ']' 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80426 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 
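The pass/fail criterion here comes straight out of bdev_get_iostat: after the digest-error injections above, the test only checks that the transient-transport-error counter is non-zero. A minimal standalone sketch of that query, with the rpc.py path, bperf socket, bdev name and jq filter taken from this run (the wrapper variable names are just for readability):

    #!/usr/bin/env bash
    # Count NVMe completions with "command transient transport error" status seen by a bdev,
    # mirroring the get_transient_errcount step in host/digest.sh.
    RPC_SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket used in this run
    BDEV=nvme0n1
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b "$BDEV" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest-error test passes only if at least one such completion was recorded.
    (( errcount > 0 )) && echo "$BDEV saw $errcount transient transport errors"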
00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80426 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:30.322 killing process with pid 80426 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80426' 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80426 00:17:30.322 Received shutdown signal, test time was about 2.000000 seconds 00:17:30.322 00:17:30.322 Latency(us) 00:17:30.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.322 =================================================================================================================== 00:17:30.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.322 22:28:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80426 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80230 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80230 ']' 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80230 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80230 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.581 killing process with pid 80230 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80230' 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80230 00:17:30.581 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80230 00:17:30.838 00:17:30.838 real 0m17.217s 00:17:30.838 user 0m31.818s 00:17:30.838 sys 0m5.266s 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.838 ************************************ 00:17:30.838 END TEST nvmf_digest_error 00:17:30.838 ************************************ 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.838 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.097 rmmod nvme_tcp 00:17:31.097 rmmod nvme_fabrics 00:17:31.097 rmmod nvme_keyring 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80230 ']' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80230 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80230 ']' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80230 00:17:31.097 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80230) - No such process 00:17:31.097 Process with pid 80230 is not found 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80230 is not found' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:31.097 00:17:31.097 real 0m35.549s 00:17:31.097 user 1m4.386s 00:17:31.097 sys 0m10.874s 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.097 ************************************ 00:17:31.097 END TEST nvmf_digest 00:17:31.097 ************************************ 00:17:31.097 22:28:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:31.097 22:28:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:31.097 22:28:44 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:31.097 22:28:44 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:31.097 22:28:44 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:31.097 22:28:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:31.097 22:28:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.097 22:28:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.097 ************************************ 00:17:31.097 START TEST nvmf_host_multipath 00:17:31.097 ************************************ 00:17:31.097 22:28:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:31.356 * 
Looking for test storage... 00:17:31.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.356 22:28:44 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:31.357 22:28:44 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.357 Cannot find device "nvmf_tgt_br" 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.357 Cannot find device "nvmf_tgt_br2" 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:17:31.357 Cannot find device "nvmf_tgt_br" 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.357 Cannot find device "nvmf_tgt_br2" 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.357 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.616 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:31.616 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.616 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:31.616 22:28:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.616 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:31.617 00:17:31.617 --- 10.0.0.2 ping statistics --- 00:17:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.617 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:17:31.617 00:17:31.617 --- 10.0.0.3 ping statistics --- 00:17:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.617 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:31.617 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:31.876 00:17:31.876 --- 10.0.0.1 ping statistics --- 00:17:31.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.876 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80691 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80691 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80691 ']' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.876 22:28:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:31.876 [2024-07-15 22:28:45.345788] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:31.876 [2024-07-15 22:28:45.345859] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.876 [2024-07-15 22:28:45.489341] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.134 [2024-07-15 22:28:45.590815] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.134 [2024-07-15 22:28:45.590870] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.134 [2024-07-15 22:28:45.590879] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.134 [2024-07-15 22:28:45.590887] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.134 [2024-07-15 22:28:45.590894] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
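For orientation, the interfaces the pings above exercised were put together by nvmf_veth_init; a condensed sketch of the commands recorded above, with the namespace, interface names and addresses exactly as used in this run:

    # Initiator side on the host (10.0.0.1); target side in a network namespace
    # with two listener addresses (10.0.0.2, 10.0.0.3); everything joined by one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The nvmf_tgt process itself runs inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt), so its TCP listeners bind to 10.0.0.2.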
00:17:32.134 [2024-07-15 22:28:45.591008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.134 [2024-07-15 22:28:45.591013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.134 [2024-07-15 22:28:45.634165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80691 00:17:32.700 22:28:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.959 [2024-07-15 22:28:46.486506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.959 22:28:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:33.233 Malloc0 00:17:33.233 22:28:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:33.491 22:28:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.749 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.749 [2024-07-15 22:28:47.361693] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.749 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:34.007 [2024-07-15 22:28:47.557747] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80743 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80743 /var/tmp/bdevperf.sock 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80743 ']' 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
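Every confirm_io_on_port block that follows repeats the same pattern: flip the ANA state of the two listeners, wait, and read back from nvmf_subsystem_get_listeners which port carries the expected state, while the bpftrace probe independently counts I/O per path. Roughly, using the rpc.py path, NQN and jq filter from this run (expected_port_for is an illustrative helper name, not part of the test scripts, and the bpftrace step is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {   # e.g. set_ANA_state non_optimized optimized  (port 4420 first, 4421 second)
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    expected_port_for() {   # print the trsvcid whose first ANA state matches, e.g. "optimized"
        "$rpc" nvmf_subsystem_get_listeners "$nqn" |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid"
    }

    set_ANA_state non_optimized optimized
    sleep 6   # give the host time to fail over to the optimized path
    [[ "$(expected_port_for optimized)" == 4421 ]] && echo "I/O expected on port 4421"

The real test then compares this port against the @path[addr, port] counts dumped from the bpftrace probe (trace.txt below).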
00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.007 22:28:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 22:28:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.941 22:28:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:34.941 22:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:35.198 22:28:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:35.456 Nvme0n1 00:17:35.456 22:28:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:36.023 Nvme0n1 00:17:36.023 22:28:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:36.023 22:28:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:36.956 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:36.956 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:37.213 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:37.213 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:37.213 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80788 00:17:37.213 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:37.213 22:28:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:43.829 22:28:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:43.829 22:28:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.829 Attaching 4 probes... 
00:17:43.829 @path[10.0.0.2, 4421]: 21861 00:17:43.829 @path[10.0.0.2, 4421]: 22599 00:17:43.829 @path[10.0.0.2, 4421]: 22587 00:17:43.829 @path[10.0.0.2, 4421]: 22445 00:17:43.829 @path[10.0.0.2, 4421]: 22591 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80788 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:43.829 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:44.111 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:44.111 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80900 00:17:44.111 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:44.111 22:28:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.699 Attaching 4 probes... 
00:17:50.699 @path[10.0.0.2, 4420]: 22474 00:17:50.699 @path[10.0.0.2, 4420]: 22653 00:17:50.699 @path[10.0.0.2, 4420]: 22653 00:17:50.699 @path[10.0.0.2, 4420]: 22808 00:17:50.699 @path[10.0.0.2, 4420]: 22860 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80900 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:50.699 22:29:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:50.699 22:29:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:50.699 22:29:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:50.699 22:29:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81018 00:17:50.699 22:29:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:50.699 22:29:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.258 Attaching 4 probes... 
00:17:57.258 @path[10.0.0.2, 4421]: 16421 00:17:57.258 @path[10.0.0.2, 4421]: 22031 00:17:57.258 @path[10.0.0.2, 4421]: 22586 00:17:57.258 @path[10.0.0.2, 4421]: 22522 00:17:57.258 @path[10.0.0.2, 4421]: 22292 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81018 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81129 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:57.258 22:29:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:03.832 22:29:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:03.832 22:29:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.832 Attaching 4 probes... 
00:18:03.832
00:18:03.832
00:18:03.832
00:18:03.832
00:18:03.832
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]]
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]]
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81129
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:18:03.832 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:18:04.097 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421
00:18:04.097 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81243
00:18:04.097 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:18:04.097 22:29:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:10.656 Attaching 4 probes...
00:18:10.656 @path[10.0.0.2, 4421]: 21271
00:18:10.656 @path[10.0.0.2, 4421]: 21360
00:18:10.656 @path[10.0.0.2, 4421]: 21719
00:18:10.656 @path[10.0.0.2, 4421]: 21775
00:18:10.656 @path[10.0.0.2, 4421]: 21986
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81243
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:10.656 22:29:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:18:11.590 22:29:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:18:11.590 22:29:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81365
00:18:11.590 22:29:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:18:11.590 22:29:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:18:18.190 22:29:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:18:18.190 22:29:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:18.190 Attaching 4 probes...
00:18:18.190 @path[10.0.0.2, 4420]: 22015
00:18:18.190 @path[10.0.0.2, 4420]: 22746
00:18:18.190 @path[10.0.0.2, 4420]: 22834
00:18:18.190 @path[10.0.0.2, 4420]: 22708
00:18:18.190 @path[10.0.0.2, 4420]: 22496
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81365
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:18.190 [2024-07-15 22:29:31.326416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:18:18.190 22:29:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
00:18:24.743 22:29:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:18:24.743 22:29:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81535
00:18:24.743 22:29:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:18:24.743 22:29:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80691 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:18:30.050 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:30.050 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:30.307 Attaching 4 probes...
00:18:30.307 @path[10.0.0.2, 4421]: 22359
00:18:30.307 @path[10.0.0.2, 4421]: 22735
00:18:30.307 @path[10.0.0.2, 4421]: 22639
00:18:30.307 @path[10.0.0.2, 4421]: 22690
00:18:30.307 @path[10.0.0.2, 4421]: 22505
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81535
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80743
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80743 ']'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80743
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80743
00:18:30.307 killing process with pid 80743
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80743'
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80743
00:18:30.307 22:29:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80743
00:18:30.579 Connection closed with partial response:
00:18:30.579
00:18:30.579
00:18:30.579 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80743
00:18:30.579 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:30.579 [2024-07-15 22:28:47.627702] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:18:30.579 [2024-07-15 22:28:47.627833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80743 ]
00:18:30.579 [2024-07-15 22:28:47.764180] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:30.579 [2024-07-15 22:28:47.865115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:30.579 [2024-07-15 22:28:47.907160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:30.579 Running I/O for 90 seconds...
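For reference, the confirm_io_on_port sequence traced above (host/multipath.sh@64 through @73) boils down to the shell sketch below. This is a reconstruction from the xtrace lines only, not the test's actual source: the function wrapper, the variable names, and the pid-capture detail are assumptions, while the rpc.py, jq, bpftrace.sh, cut/awk/sed, and kill invocations are taken from the trace. The raw bdevperf output from try.txt continues after the sketch.

#!/usr/bin/env bash
# Minimal sketch of confirm_io_on_port, reconstructed from the xtrace above.
# Assumptions: the function wrapper and variable names, and that bpftrace.sh
# prints the pid of the probe process it backgrounds (the trace only shows
# dtrace_pid being assigned around the bpftrace.sh call).

rootdir=/home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py
trace_file=$rootdir/test/nvmf/host/trace.txt
probed_pid=80691 # pid handed to bpftrace.sh in the trace above

confirm_io_on_port() {
	local ana_state=$1 expected_port=$2

	# multipath.sh@64/@65: attach scripts/bpf/nvmf_path.bt, which counts I/O per
	# path into @path[<ip>, <port>] entries written to trace.txt
	dtrace_pid=$("$rootdir/scripts/bpftrace.sh" "$probed_pid" "$rootdir/scripts/bpf/nvmf_path.bt")

	# @66: let the probes accumulate I/O counters
	sleep 6

	# @67: the port of the listener that currently reports the requested ANA state
	active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
		| jq -r '.[] | select (.ana_states[0].ana_state=="'"$ana_state"'") | .address.trsvcid')

	# @68/@69: the port bpftrace actually saw I/O on; cut drops the count,
	# awk keys on the fixed "@path[10.0.0.2," prefix and prints the port field
	cat "$trace_file"
	port=$(cut -d ']' -f1 "$trace_file" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)

	# @70/@71: both views must agree on the expected port for the check to pass
	[[ $port == "$expected_port" ]]
	[[ $active_port == "$expected_port" ]]

	# @72/@73: stop the probes and reset the trace for the next check
	kill "$dtrace_pid"
	rm -f "$trace_file"
}

# Used above as, e.g., confirm_io_on_port optimized 4421 after flipping the
# listeners' ANA states with nvmf_subsystem_listener_set_ana_state.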
00:18:30.579 [2024-07-15 22:28:57.453238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.453568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.453973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.453985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.454016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.454050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.579 [2024-07-15 22:28:57.454081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.454117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.454148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.579 [2024-07-15 22:28:57.454178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.579 [2024-07-15 22:28:57.454196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.580 [2024-07-15 22:28:57.454270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.454588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.454975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.454987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.455018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.455049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.455079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.580 [2024-07-15 22:28:57.455110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:18:30.580 [2024-07-15 22:28:57.455260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.580 [2024-07-15 22:28:57.455506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.580 [2024-07-15 22:28:57.455519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.455903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.455984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.455997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.581 [2024-07-15 22:28:57.456191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.581 [2024-07-15 22:28:57.456409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.581 [2024-07-15 22:28:57.456763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.581 [2024-07-15 22:28:57.456781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:28:57.456794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.456820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:28:57.456833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.456851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:28:57.456863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.456882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:28:57.456894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:28:57.458079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.582 
[2024-07-15 22:28:57.458290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:28:57.458692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:28:57.458705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.979986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.979999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.980030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.980060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.980092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.582 [2024-07-15 22:29:03.980123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:29:03.980155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:29:03.980194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:29:03.980225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:30.582 [2024-07-15 22:29:03.980258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.582 [2024-07-15 22:29:03.980277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.583 [2024-07-15 22:29:03.980924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.980975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.980992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.583 [2024-07-15 22:29:03.981417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.583 [2024-07-15 22:29:03.981430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.981480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:30.584 [2024-07-15 22:29:03.981988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.584 [2024-07-15 22:29:03.982357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.584 [2024-07-15 22:29:03.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.584 [2024-07-15 22:29:03.982928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.982942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.982961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.982975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.982995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.983400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.983413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.585 [2024-07-15 22:29:03.984623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 
22:29:03.984841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.984972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.984985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127352 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.585 [2024-07-15 22:29:03.985517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.585 [2024-07-15 22:29:03.985536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.985569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.985617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.985651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.985684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.985719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.985733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 
22:29:03.986219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.586 [2024-07-15 22:29:03.986373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:03.986675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:03.986689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.586 [2024-07-15 22:29:04.001851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.586 [2024-07-15 22:29:04.001869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:18:30.586 [2024-07-15 22:29:04.001920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:30.586 [2024-07-15 22:29:04.001940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:18:30.586 [2024-07-15 22:29:04.001971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:30.586 [2024-07-15 22:29:04.001989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:18:30.586 [2024-07-15 22:29:04.002016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:30.586 [2024-07-15 22:29:04.002034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:18:30.586 [2024-07-15 22:29:04.002061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:30.586 [2024-07-15 22:29:04.002079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:18:30.586 ... (from 22:29:04.002 through 22:29:04.012 every remaining outstanding READ/WRITE on qid:1, cid 0-126, lba 126336-127352, is reported by 243:nvme_io_qpair_print_command and completed by 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0) ...
00:18:30.591 [2024-07-15 22:29:04.012938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:30.591 [2024-07-15 22:29:04.012955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:18:30.591 [2024-07-15 22:29:04.012979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:30.591 [2024-07-15
22:29:04.012995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.591 [2024-07-15 22:29:04.013019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.591 [2024-07-15 22:29:04.013036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127344 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.013696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.013713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.015878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.015919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.015943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.015960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 
22:29:04.015983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.592 [2024-07-15 22:29:04.016209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.592 [2024-07-15 22:29:04.016523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.592 [2024-07-15 22:29:04.016540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.016881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.016922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.016962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.016986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.017552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.017962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.017978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.593 [2024-07-15 22:29:04.018222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.018269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.593 [2024-07-15 22:29:04.018293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.593 [2024-07-15 22:29:04.018310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 
sqhd:000c p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.018554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.018961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.018979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 
22:29:04.019274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126672 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.594 [2024-07-15 22:29:04.019883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.019980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.594 [2024-07-15 22:29:04.019993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.594 [2024-07-15 22:29:04.020012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.595 [2024-07-15 22:29:04.020026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.595 [2024-07-15 22:29:04.020045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.595 [2024-07-15 22:29:04.020058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE output omitted: per-command WRITE/READ prints on qid:1 with completions reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) at 22:29:04, 22:29:10 and 22:29:23, followed by ABORTED - SQ DELETION (00/08) completions ...]
00:18:30.599 [2024-07-15 22:29:23.877571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.599 [2024-07-15 22:29:23.877583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.599 [2024-07-15 22:29:23.877605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.599 [2024-07-15 22:29:23.877618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.599 [2024-07-15 22:29:23.877631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.599 [2024-07-15 22:29:23.877643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.599 [2024-07-15 22:29:23.877656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.599 [2024-07-15 22:29:23.877669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.599 [2024-07-15 22:29:23.877682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.599 [2024-07-15 22:29:23.877694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.877723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.877976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.877990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:30.600 [2024-07-15 22:29:23.878123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 
22:29:23.878381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.600 [2024-07-15 22:29:23.878549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.600 [2024-07-15 22:29:23.878669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.600 [2024-07-15 22:29:23.878682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:102 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.601 [2024-07-15 22:29:23.878982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.878995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.601 [2024-07-15 22:29:23.879165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4a10 is 
same with the state(5) to be set 00:18:30.601 [2024-07-15 22:29:23.879192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122552 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123072 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123080 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123088 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123096 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123104 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 
[2024-07-15 22:29:23.879451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123112 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123120 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123128 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123136 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123144 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123152 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879715] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123160 len:8 PRP1 0x0 PRP2 0x0 00:18:30.601 [2024-07-15 22:29:23.879745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.601 [2024-07-15 22:29:23.879757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.601 [2024-07-15 22:29:23.879765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.601 [2024-07-15 22:29:23.879774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123168 len:8 PRP1 0x0 PRP2 0x0 00:18:30.602 [2024-07-15 22:29:23.879786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.602 [2024-07-15 22:29:23.879798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.602 [2024-07-15 22:29:23.879807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.602 [2024-07-15 22:29:23.879816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123176 len:8 PRP1 0x0 PRP2 0x0 00:18:30.602 [2024-07-15 22:29:23.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.602 [2024-07-15 22:29:23.879841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.602 [2024-07-15 22:29:23.879850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.602 [2024-07-15 22:29:23.879859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123184 len:8 PRP1 0x0 PRP2 0x0 00:18:30.602 [2024-07-15 22:29:23.879870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.602 [2024-07-15 22:29:23.879883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.602 [2024-07-15 22:29:23.879891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.602 [2024-07-15 22:29:23.879900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123192 len:8 PRP1 0x0 PRP2 0x0 00:18:30.602 [2024-07-15 22:29:23.879912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.602 [2024-07-15 22:29:23.879959] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24b4a10 was disconnected and freed. reset controller. 
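The wall of READ/WRITE completions above, all reported with ASYMMETRIC ACCESS INACCESSIBLE (03/02) or ABORTED - SQ DELETION (00/08) status, is the host draining every command still queued on qpair 0x24b4a10 once the active path stopped responding; when the queue is empty the qpair is freed and a controller reset is scheduled. When reading a saved copy of this transcript it can help to collapse that flood into per-status counts; a minimal sketch, assuming the log was captured to a hypothetical file named multipath.log:

  # Count how many completions of each abort status the failover produced.
  # multipath.log is a placeholder for wherever this transcript was saved.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE\|ABORTED - SQ DELETION' multipath.log | sort | uniq -c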
00:18:30.602 [2024-07-15 22:29:23.880844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.602 [2024-07-15 22:29:23.880908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.602 [2024-07-15 22:29:23.880924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.602 [2024-07-15 22:29:23.880949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24352a0 (9): Bad file descriptor 00:18:30.602 [2024-07-15 22:29:23.881249] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.602 [2024-07-15 22:29:23.881274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24352a0 with addr=10.0.0.2, port=4421 00:18:30.602 [2024-07-15 22:29:23.881290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24352a0 is same with the state(5) to be set 00:18:30.602 [2024-07-15 22:29:23.881351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24352a0 (9): Bad file descriptor 00:18:30.602 [2024-07-15 22:29:23.881385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.602 [2024-07-15 22:29:23.881398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.602 [2024-07-15 22:29:23.881411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.602 [2024-07-15 22:29:23.881435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:30.602 [2024-07-15 22:29:23.881446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.602 [2024-07-15 22:29:33.916767] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
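The block above is the bdev_nvme reconnect path in action: the controller is disconnected, the first reconnect attempt to 10.0.0.2 port 4421 fails with errno 111 (connection refused), the controller briefly sits in a failed state, and roughly ten seconds later a retry succeeds and the reset completes. The retry cadence is governed by the reconnect options given when the controller is attached; a minimal sketch using the same rpc.py flags that appear later in this log (the socket path and the 5-second/2-second values are illustrative, not necessarily what this multipath run used):

  # Attach a controller that keeps retrying the connection for up to 5 s,
  # with one reconnect attempt every 2 s (illustrative values).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2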
00:18:30.602 Received shutdown signal, test time was about 54.471210 seconds
00:18:30.602
00:18:30.602 Latency(us)
00:18:30.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:30.602 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:30.602 Verification LBA range: start 0x0 length 0x4000
00:18:30.602 Nvme0n1 : 54.47 9496.87 37.10 0.00 0.00 13462.07 934.35 7061253.96
00:18:30.602 ===================================================================================================================
00:18:30.602 Total : 9496.87 37.10 0.00 0.00 13462.07 934.35 7061253.96
00:18:30.602 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:30.860 rmmod nvme_tcp
00:18:30.860 rmmod nvme_fabrics
00:18:30.860 rmmod nvme_keyring
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80691 ']'
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80691
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80691 ']'
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80691
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:30.860 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80691
00:18:30.861 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:30.861 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:30.861 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80691'
00:18:30.861 killing process with pid 80691
00:18:30.861 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80691
00:18:30.861 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80691
00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.118 00:18:31.118 real 0m59.977s 00:18:31.118 user 2m40.955s 00:18:31.118 sys 0m22.977s 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.118 ************************************ 00:18:31.118 END TEST nvmf_host_multipath 00:18:31.118 ************************************ 00:18:31.118 22:29:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:31.118 22:29:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.118 22:29:44 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:31.118 22:29:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.118 22:29:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.118 22:29:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.118 ************************************ 00:18:31.118 START TEST nvmf_timeout 00:18:31.118 ************************************ 00:18:31.118 22:29:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:31.376 * Looking for test storage... 
00:18:31.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.376 22:29:44 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.377 
22:29:44 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.377 22:29:44 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.377 Cannot find device "nvmf_tgt_br" 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.377 Cannot find device "nvmf_tgt_br2" 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.377 Cannot find device "nvmf_tgt_br" 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:31.377 22:29:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.635 Cannot find device "nvmf_tgt_br2" 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.635 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.635 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:18:31.893 00:18:31.893 --- 10.0.0.2 ping statistics --- 00:18:31.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.893 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:31.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:31.893 00:18:31.893 --- 10.0.0.3 ping statistics --- 00:18:31.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.893 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:31.893 00:18:31.893 --- 10.0.0.1 ping statistics --- 00:18:31.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.893 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:31.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81847 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81847 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81847 ']' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.893 22:29:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:31.893 [2024-07-15 22:29:45.426840] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:18:31.894 [2024-07-15 22:29:45.426922] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.152 [2024-07-15 22:29:45.572908] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:32.152 [2024-07-15 22:29:45.659854] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.152 [2024-07-15 22:29:45.659902] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.152 [2024-07-15 22:29:45.659911] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.152 [2024-07-15 22:29:45.659920] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.152 [2024-07-15 22:29:45.659926] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.152 [2024-07-15 22:29:45.660112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.152 [2024-07-15 22:29:45.660278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.152 [2024-07-15 22:29:45.702836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.716 22:29:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:32.973 [2024-07-15 22:29:46.489013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.973 22:29:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:33.231 Malloc0 00:18:33.231 22:29:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.489 22:29:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.489 22:29:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.748 [2024-07-15 22:29:47.253744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81896 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81896 /var/tmp/bdevperf.sock 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81896 ']' 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.748 22:29:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:33.748 [2024-07-15 22:29:47.319948] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:33.748 [2024-07-15 22:29:47.320011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81896 ] 00:18:34.006 [2024-07-15 22:29:47.464172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.006 [2024-07-15 22:29:47.538507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.006 [2024-07-15 22:29:47.579298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:34.609 22:29:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.609 22:29:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:34.609 22:29:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:34.867 22:29:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:35.122 NVMe0n1 00:18:35.122 22:29:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81914 00:18:35.122 22:29:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.122 22:29:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:35.122 Running I/O for 10 seconds... 
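The trace above is the entire setup for this timeout test: nvmf/common.sh builds a veth/bridge topology with the target in its own network namespace, host/timeout.sh starts nvmf_tgt inside that namespace and provisions a Malloc-backed subsystem, then bdevperf is started in -z mode and attached with the reconnect knobs under test before perform_tests kicks off the 10-second run. A condensed sketch of the same steps, using only commands, paths and addresses that appear in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here; "rpc.py" stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the binaries live under /home/vagrant/spdk_repo/spdk/build/, as in the trace):

    # Topology: initiator side 10.0.0.1 on the host, target side 10.0.0.2 inside the
    # nvmf_tgt_ns_spdk namespace, the veth peers bridged together via nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Target: nvmf_tgt runs inside the namespace; rpc.py talks to its default
    # socket /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: bdevperf waits in -z mode on its own RPC socket; the controller is
    # attached with a 2 s reconnect delay and a 5 s controller-loss timeout, then the
    # I/O run is driven via bdevperf.py perform_tests ("Running I/O for 10 seconds...").
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &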
00:18:36.051 22:29:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.311 [2024-07-15 22:29:49.753357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 
22:29:49.753608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.311 [2024-07-15 22:29:49.753636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.311 [2024-07-15 22:29:49.753645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.753983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.753993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.312 [2024-07-15 22:29:49.754149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 
22:29:49.754359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.312 [2024-07-15 22:29:49.754421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.312 [2024-07-15 22:29:49.754431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.313 [2024-07-15 22:29:49.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.313 [2024-07-15 22:29:49.754457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.313 [2024-07-15 22:29:49.754612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80344 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.754981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.754991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:36.313 [2024-07-15 22:29:49.755111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.313 [2024-07-15 22:29:49.755220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.313 [2024-07-15 22:29:49.755230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.314 [2024-07-15 22:29:49.755774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa748d0 is same with the state(5) to be set 00:18:36.314 [2024-07-15 22:29:49.755794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.314 [2024-07-15 22:29:49.755800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.314 [2024-07-15 22:29:49.755808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80720 len:8 PRP1 0x0 PRP2 0x0 00:18:36.314 [2024-07-15 22:29:49.755816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.314 [2024-07-15 22:29:49.755862] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa748d0 was disconnected and freed. reset controller. 
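Everything in the block above is the expected fallout of host/timeout.sh@55: the 10.0.0.2:4420 listener is removed while bdevperf still has its queue depth of 128 outstanding, so the target deletes the submission queue, every in-flight READ/WRITE is completed with ABORTED - SQ DELETION, and bdev_nvme frees the disconnected qpair (0xa748d0) and begins resetting the controller. The trigger is a single target-side RPC; a sketch with the same identifiers as above:

    # Pull the listener out from under the connected initiator. Nothing listens on
    # 10.0.0.2:4420 afterwards, so each reconnect attempt that follows in the log fails
    # in uring_sock_create with errno 111 (connection refused).
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420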
00:18:36.314 [2024-07-15 22:29:49.756072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.314 [2024-07-15 22:29:49.756143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23ee0 (9): Bad file descriptor 00:18:36.314 [2024-07-15 22:29:49.756224] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.314 [2024-07-15 22:29:49.756238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23ee0 with addr=10.0.0.2, port=4420 00:18:36.314 [2024-07-15 22:29:49.756247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23ee0 is same with the state(5) to be set 00:18:36.314 [2024-07-15 22:29:49.756265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23ee0 (9): Bad file descriptor 00:18:36.314 [2024-07-15 22:29:49.756278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.314 [2024-07-15 22:29:49.756286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:36.314 [2024-07-15 22:29:49.756296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.314 [2024-07-15 22:29:49.756312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:36.314 [2024-07-15 22:29:49.756320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.314 22:29:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:38.214 [2024-07-15 22:29:51.753256] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.214 [2024-07-15 22:29:51.753307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23ee0 with addr=10.0.0.2, port=4420 00:18:38.214 [2024-07-15 22:29:51.753320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23ee0 is same with the state(5) to be set 00:18:38.214 [2024-07-15 22:29:51.753341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23ee0 (9): Bad file descriptor 00:18:38.214 [2024-07-15 22:29:51.753365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:38.214 [2024-07-15 22:29:51.753382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:38.214 [2024-07-15 22:29:51.753392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:38.214 [2024-07-15 22:29:51.753413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:38.214 [2024-07-15 22:29:51.753422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.214 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:38.214 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:38.214 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.473 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:38.473 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:38.473 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:38.473 22:29:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:38.731 22:29:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:38.731 22:29:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:40.138 [2024-07-15 22:29:53.750401] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.138 [2024-07-15 22:29:53.750467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23ee0 with addr=10.0.0.2, port=4420 00:18:40.138 [2024-07-15 22:29:53.750481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23ee0 is same with the state(5) to be set 00:18:40.138 [2024-07-15 22:29:53.750504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23ee0 (9): Bad file descriptor 00:18:40.138 [2024-07-15 22:29:53.750521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.138 [2024-07-15 22:29:53.750531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.138 [2024-07-15 22:29:53.750543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.138 [2024-07-15 22:29:53.750565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:40.138 [2024-07-15 22:29:53.750574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.689 [2024-07-15 22:29:55.747429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:42.689 [2024-07-15 22:29:55.747490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:42.689 [2024-07-15 22:29:55.747501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:42.689 [2024-07-15 22:29:55.747511] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:42.689 [2024-07-15 22:29:55.747533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
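The reconnect attempts above land roughly 2 s apart (22:29:49, :51, :53), matching --reconnect-delay-sec 2, and stop once the 5 s --ctrlr-loss-timeout-sec window is spent, at which point the controller is failed for good ("already in failed state" at 22:29:55). While the retries are still inside that window, the controller and its bdev stay registered, which is what the @57/@58 checks just above assert (NVMe0 / NVMe0n1); once the deadline passes, the same two queries come back empty, which the later @62/@63 checks depend on. The checks, as issued against the bdevperf RPC socket in the trace:

    # During the retry window: both names are still present.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1
    # After ctrlr-loss-timeout-sec expires the controller is deleted, so both commands
    # print nothing and the script's [[ '' == '' ]] comparisons pass.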
00:18:43.276 00:18:43.276 Latency(us) 00:18:43.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.276 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.277 Verification LBA range: start 0x0 length 0x4000 00:18:43.277 NVMe0n1 : 8.10 1235.90 4.83 15.81 0.00 102181.88 3079.40 7061253.96 00:18:43.277 =================================================================================================================== 00:18:43.277 Total : 1235.90 4.83 15.81 0.00 102181.88 3079.40 7061253.96 00:18:43.277 0 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:43.841 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 81914 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81896 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81896 ']' 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81896 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81896 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:44.099 killing process with pid 81896 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81896' 00:18:44.099 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81896 00:18:44.099 Received shutdown signal, test time was about 8.948711 seconds 00:18:44.099 00:18:44.099 Latency(us) 00:18:44.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.100 =================================================================================================================== 00:18:44.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.100 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81896 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.358 [2024-07-15 22:29:57.943582] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82030 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82030 
/var/tmp/bdevperf.sock 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82030 ']' 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.358 22:29:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:44.616 [2024-07-15 22:29:58.006065] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:44.616 [2024-07-15 22:29:58.006130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82030 ] 00:18:44.616 [2024-07-15 22:29:58.146429] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.616 [2024-07-15 22:29:58.229744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.874 [2024-07-15 22:29:58.270492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.440 22:29:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.440 22:29:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:45.440 22:29:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:45.440 22:29:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:45.698 NVMe0n1 00:18:45.955 22:29:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82054 00:18:45.955 22:29:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.955 22:29:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:45.955 Running I/O for 10 seconds... 
00:18:46.890 22:30:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.890 [2024-07-15 22:30:00.512350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.512544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 
[2024-07-15 22:30:00.512580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.512984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.512994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-07-15 22:30:00.513444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.890 [2024-07-15 22:30:00.513526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:46.890 [2024-07-15 22:30:00.513545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.890 [2024-07-15 22:30:00.513554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.513573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.513592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.513610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 
22:30:00.513743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.513984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.513994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.891 [2024-07-15 22:30:00.514365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-07-15 22:30:00.514643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9648d0 is same with the state(5) to be set 00:18:46.891 [2024-07-15 22:30:00.514663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:103872 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104200 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104208 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104216 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104224 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104232 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 [2024-07-15 22:30:00.514838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.891 [2024-07-15 22:30:00.514853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.891 [2024-07-15 22:30:00.514862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104240 len:8 PRP1 0x0 PRP2 0x0 00:18:46.891 
[2024-07-15 22:30:00.514871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.891 [2024-07-15 22:30:00.514879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.892 [2024-07-15 22:30:00.514885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.892 [2024-07-15 22:30:00.514893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104248 len:8 PRP1 0x0 PRP2 0x0 00:18:46.892 [2024-07-15 22:30:00.514901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.892 [2024-07-15 22:30:00.514909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.892 [2024-07-15 22:30:00.514916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.892 [2024-07-15 22:30:00.514923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104256 len:8 PRP1 0x0 PRP2 0x0 00:18:46.892 [2024-07-15 22:30:00.514930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.892 [2024-07-15 22:30:00.514974] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9648d0 was disconnected and freed. reset controller. 00:18:46.892 [2024-07-15 22:30:00.515168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.892 [2024-07-15 22:30:00.515227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:46.892 [2024-07-15 22:30:00.515301] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.892 [2024-07-15 22:30:00.515314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:18:46.892 [2024-07-15 22:30:00.515323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:46.892 [2024-07-15 22:30:00.515336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:46.892 [2024-07-15 22:30:00.515349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.892 [2024-07-15 22:30:00.515357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.892 [2024-07-15 22:30:00.515367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.892 [2024-07-15 22:30:00.515383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:46.892 [2024-07-15 22:30:00.515391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:47.150 22:30:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:18:48.085 [2024-07-15 22:30:01.513867] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.085 [2024-07-15 22:30:01.513922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:18:48.085 [2024-07-15 22:30:01.513935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:48.085 [2024-07-15 22:30:01.513953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:48.085 [2024-07-15 22:30:01.513967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.085 [2024-07-15 22:30:01.513976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.085 [2024-07-15 22:30:01.513986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:48.085 [2024-07-15 22:30:01.514005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.085 [2024-07-15 22:30:01.514014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.085 22:30:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.085 [2024-07-15 22:30:01.700209] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.343 22:30:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82054 00:18:48.928 [2024-07-15 22:30:02.531319] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:57.092 00:18:57.092 Latency(us) 00:18:57.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.092 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.092 Verification LBA range: start 0x0 length 0x4000 00:18:57.092 NVMe0n1 : 10.01 8519.27 33.28 0.00 0.00 15000.18 1026.47 3018551.31 00:18:57.092 =================================================================================================================== 00:18:57.092 Total : 8519.27 33.28 0.00 0.00 15000.18 1026.47 3018551.31 00:18:57.092 0 00:18:57.092 22:30:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82158 00:18:57.092 22:30:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.092 22:30:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:18:57.092 Running I/O for 10 seconds... 
00:18:57.092 22:30:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.092 [2024-07-15 22:30:10.619310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 
22:30:10.619532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.092 [2024-07-15 22:30:10.619651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.092 [2024-07-15 22:30:10.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.092 [2024-07-15 22:30:10.619834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.619984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.619994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.093 [2024-07-15 22:30:10.620291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 
22:30:10.620337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.093 [2024-07-15 22:30:10.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.093 [2024-07-15 22:30:10.620587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98016 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.620946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.620986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.620996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 
[2024-07-15 22:30:10.621075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.094 [2024-07-15 22:30:10.621200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.094 [2024-07-15 22:30:10.621333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x994380 is same with the state(5) to be set 00:18:57.094 [2024-07-15 22:30:10.621353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.094 [2024-07-15 22:30:10.621360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.094 [2024-07-15 22:30:10.621367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:18:57.094 [2024-07-15 22:30:10.621382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.094 [2024-07-15 22:30:10.621392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98704 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98712 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:18:57.095 [2024-07-15 22:30:10.621478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98720 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98728 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98736 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98744 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98752 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98760 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621679] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98768 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98776 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98784 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98792 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98800 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98808 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.621967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.621974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.621984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.621993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.622000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.622007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.622016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.622024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.622031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.622039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.622047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.622056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 [2024-07-15 22:30:10.622064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 
22:30:10.622074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.622083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.622091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.095 22:30:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:57.095 [2024-07-15 22:30:10.637538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.095 [2024-07-15 22:30:10.637563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:18:57.095 [2024-07-15 22:30:10.637575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.637648] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x994380 was disconnected and freed. reset controller. 00:18:57.095 [2024-07-15 22:30:10.637740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.095 [2024-07-15 22:30:10.637755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.637768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.095 [2024-07-15 22:30:10.637779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.637792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.095 [2024-07-15 22:30:10.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.637814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.095 [2024-07-15 22:30:10.637826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.095 [2024-07-15 22:30:10.637837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:57.095 [2024-07-15 22:30:10.638060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.096 [2024-07-15 22:30:10.638079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:57.096 [2024-07-15 22:30:10.638166] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.096 [2024-07-15 22:30:10.638184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:18:57.096 [2024-07-15 22:30:10.638195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:57.096 [2024-07-15 22:30:10.638213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x913ee0 (9): Bad file descriptor 00:18:57.096 [2024-07-15 22:30:10.638230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.096 [2024-07-15 22:30:10.638240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.096 [2024-07-15 22:30:10.638253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.096 [2024-07-15 22:30:10.638271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:57.096 [2024-07-15 22:30:10.638282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.032 [2024-07-15 22:30:11.636755] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.032 [2024-07-15 22:30:11.636801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:18:58.032 [2024-07-15 22:30:11.636812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:58.032 [2024-07-15 22:30:11.636830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:58.032 [2024-07-15 22:30:11.636844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.032 [2024-07-15 22:30:11.636853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:58.032 [2024-07-15 22:30:11.636863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.032 [2024-07-15 22:30:11.636882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.032 [2024-07-15 22:30:11.636891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:59.410 [2024-07-15 22:30:12.635357] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.410 [2024-07-15 22:30:12.635398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:18:59.410 [2024-07-15 22:30:12.635409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:18:59.410 [2024-07-15 22:30:12.635426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:18:59.410 [2024-07-15 22:30:12.635439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.410 [2024-07-15 22:30:12.635448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:59.410 [2024-07-15 22:30:12.635457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.410 [2024-07-15 22:30:12.635474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:59.410 [2024-07-15 22:30:12.635483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.357 [2024-07-15 22:30:13.636585] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.357 [2024-07-15 22:30:13.636649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913ee0 with addr=10.0.0.2, port=4420 00:19:00.357 [2024-07-15 22:30:13.636661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913ee0 is same with the state(5) to be set 00:19:00.357 [2024-07-15 22:30:13.636842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x913ee0 (9): Bad file descriptor 00:19:00.357 [2024-07-15 22:30:13.637015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.357 [2024-07-15 22:30:13.637024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.357 [2024-07-15 22:30:13.637033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:00.357 [2024-07-15 22:30:13.639818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:00.357 [2024-07-15 22:30:13.639845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.358 22:30:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.358 [2024-07-15 22:30:13.883539] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.358 22:30:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82158 00:19:01.338 [2024-07-15 22:30:14.668718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
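Between the listener removal and this point, every reconnect attempt fails with connect() errno 111 (ECONNREFUSED) roughly once per second; re-adding the listener is what lets the next reset attempt complete. A sketch of that recovery step, using the same RPC shown at host/timeout.sh@102; the trailing bdev_nvme_get_controllers call is an optional sanity check added here for illustration, not something this run performs:

    # Restore the TCP listener so the host's periodic reconnects stop failing
    # with ECONNREFUSED and the controller reset can finish.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Optional: confirm from the initiator side that NVMe0 reattached.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers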
00:19:06.608 00:19:06.608 Latency(us) 00:19:06.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.608 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.608 Verification LBA range: start 0x0 length 0x4000 00:19:06.608 NVMe0n1 : 10.01 7243.14 28.29 5202.56 0.00 10265.03 460.59 3032026.99 00:19:06.608 =================================================================================================================== 00:19:06.608 Total : 7243.14 28.29 5202.56 0.00 10265.03 0.00 3032026.99 00:19:06.608 0 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82030 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82030 ']' 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82030 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82030 00:19:06.608 killing process with pid 82030 00:19:06.608 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.608 00:19:06.608 Latency(us) 00:19:06.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.608 =================================================================================================================== 00:19:06.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82030' 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82030 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82030 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82272 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82272 /var/tmp/bdevperf.sock 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82272 ']' 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.608 22:30:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:06.608 [2024-07-15 22:30:19.832983] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:19:06.608 [2024-07-15 22:30:19.833079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82272 ] 00:19:06.608 [2024-07-15 22:30:19.982559] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.608 [2024-07-15 22:30:20.077048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.608 [2024-07-15 22:30:20.117984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:07.174 22:30:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.174 22:30:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:07.174 22:30:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82272 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:07.174 22:30:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82287 00:19:07.174 22:30:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:07.434 22:30:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:07.692 NVMe0n1 00:19:07.692 22:30:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82330 00:19:07.692 22:30:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.692 22:30:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:07.949 Running I/O for 10 seconds... 
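For reference, the second bdevperf pass above is configured entirely over its own RPC socket before any I/O is issued. The sketch below condenses the commands recorded in the trace (socket path, core mask, and timeout values are the ones used in this run; the backgrounding is only illustrative of how host/timeout.sh sequences the pieces):

    # Start bdevperf idle (-z) on a private RPC socket so it can be configured first
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &

    # NVMe bdev module options exactly as issued by the test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

    # Attach the target with a 5 s controller-loss timeout and a 2 s reconnect delay
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Begin the 10-second randread workload ("Running I/O for 10 seconds...")
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener is then removed (the next RPC in the log), so the in-flight reads complete as ABORTED - SQ DELETION, which is what the long run of READ / ABORTED pairs that follows records.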
00:19:08.912 22:30:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.912 [2024-07-15 22:30:22.427225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set [... the same nvmf_tcp_qpair_set_recv_state error repeats many times here, with only the timestamp advancing ...] 00:19:08.912 [2024-07-15 22:30:22.428148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same
with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428171] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.912 [2024-07-15 22:30:22.428256] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.913 [2024-07-15 22:30:22.428264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.913 [2024-07-15 22:30:22.428272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.913 [2024-07-15 22:30:22.428280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1b80 is same with the state(5) to be set 00:19:08.913 [2024-07-15 22:30:22.428332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 
22:30:22.428401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.428984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.428994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 
22:30:22.429513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.913 [2024-07-15 22:30:22.429578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.913 [2024-07-15 22:30:22.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.429988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.429996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 
[2024-07-15 22:30:22.430259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.914 [2024-07-15 22:30:22.430664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.914 [2024-07-15 22:30:22.430672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.915 [2024-07-15 22:30:22.430682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.915 [2024-07-15 22:30:22.430690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.915 [2024-07-15 22:30:22.430699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157a570 is same with the state(5) to be set 00:19:08.915 [2024-07-15 22:30:22.430710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:08.915 [2024-07-15 22:30:22.430717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:08.915 [2024-07-15 22:30:22.430724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:8 PRP1 0x0 PRP2 0x0 00:19:08.915 [2024-07-15 22:30:22.430733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.915 [2024-07-15 22:30:22.430778] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157a570 was disconnected and freed. reset controller. 
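The block above is the host side of a controller reset: once the submission queue is deleted, every queued READ completes with ABORTED - SQ DELETION before the qpair is freed and the reset proceeds. When working from a saved capture of this output, the flood is easier to summarize than to read line by line; a minimal sketch, assuming the console output was saved to a file called console.log (hypothetical name):

    # Count the aborted completions and how many distinct CIDs they covered.
    grep -c 'ABORTED - SQ DELETION' console.log
    grep -o 'READ sqid:1 cid:[0-9]*' console.log | sort -u | wc -l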
00:19:08.915 [2024-07-15 22:30:22.431004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.915 [2024-07-15 22:30:22.431071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529da0 (9): Bad file descriptor 00:19:08.915 [2024-07-15 22:30:22.431158] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:08.915 [2024-07-15 22:30:22.431172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529da0 with addr=10.0.0.2, port=4420 00:19:08.915 [2024-07-15 22:30:22.431183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529da0 is same with the state(5) to be set 00:19:08.915 [2024-07-15 22:30:22.431197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529da0 (9): Bad file descriptor 00:19:08.915 [2024-07-15 22:30:22.431209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.915 [2024-07-15 22:30:22.431218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:08.915 [2024-07-15 22:30:22.431228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:08.915 [2024-07-15 22:30:22.431244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:08.915 [2024-07-15 22:30:22.431254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.915 22:30:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82330 00:19:10.811 [2024-07-15 22:30:24.428202] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.811 [2024-07-15 22:30:24.428262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529da0 with addr=10.0.0.2, port=4420 00:19:10.811 [2024-07-15 22:30:24.428277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529da0 is same with the state(5) to be set 00:19:10.811 [2024-07-15 22:30:24.428299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529da0 (9): Bad file descriptor 00:19:10.811 [2024-07-15 22:30:24.428325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:10.811 [2024-07-15 22:30:24.428334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:10.811 [2024-07-15 22:30:24.428344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:10.811 [2024-07-15 22:30:24.428365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
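Each reconnect attempt above fails with errno 111 (ECONNREFUSED) because the target side is no longer accepting connections, and bdev_nvme retries on a roughly two-second cadence, as the attempts that follow show. The same cadence can be mimicked from a shell with bash's built-in /dev/tcp probe; this is purely illustrative of the retry pattern, not how the driver itself reconnects:

    # Probe the NVMe/TCP listener at 10.0.0.2:4420 every 2 seconds until it accepts.
    # The subshell opens and immediately closes the test connection.
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        echo 'connection refused, retrying in 2 s'
        sleep 2
    done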
00:19:10.811 [2024-07-15 22:30:24.428374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.397 [2024-07-15 22:30:26.425295] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.397 [2024-07-15 22:30:26.425352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1529da0 with addr=10.0.0.2, port=4420 00:19:13.397 [2024-07-15 22:30:26.425367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529da0 is same with the state(5) to be set 00:19:13.397 [2024-07-15 22:30:26.425396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1529da0 (9): Bad file descriptor 00:19:13.397 [2024-07-15 22:30:26.425412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.397 [2024-07-15 22:30:26.425421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:13.397 [2024-07-15 22:30:26.425431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:13.398 [2024-07-15 22:30:26.425453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.398 [2024-07-15 22:30:26.425462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.301 [2024-07-15 22:30:28.422295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.301 [2024-07-15 22:30:28.422346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.301 [2024-07-15 22:30:28.422357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:15.301 [2024-07-15 22:30:28.422366] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:15.301 [2024-07-15 22:30:28.422385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:15.868 00:19:15.868 Latency(us) 00:19:15.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.868 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:15.868 NVMe0n1 : 8.10 2649.74 10.35 15.80 0.00 48149.64 6027.21 7061253.96 00:19:15.868 =================================================================================================================== 00:19:15.868 Total : 2649.74 10.35 15.80 0.00 48149.64 6027.21 7061253.96 00:19:15.868 0 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.868 Attaching 5 probes... 
00:19:15.868 1083.210494: reset bdev controller NVMe0 00:19:15.868 1083.323059: reconnect bdev controller NVMe0 00:19:15.868 3080.300871: reconnect delay bdev controller NVMe0 00:19:15.868 3080.324845: reconnect bdev controller NVMe0 00:19:15.868 5077.402030: reconnect delay bdev controller NVMe0 00:19:15.868 5077.418973: reconnect bdev controller NVMe0 00:19:15.868 7074.496256: reconnect delay bdev controller NVMe0 00:19:15.868 7074.512813: reconnect bdev controller NVMe0 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82287 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82272 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82272 ']' 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82272 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.868 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82272 00:19:16.126 killing process with pid 82272 00:19:16.126 Received shutdown signal, test time was about 8.184967 seconds 00:19:16.126 00:19:16.126 Latency(us) 00:19:16.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.126 =================================================================================================================== 00:19:16.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82272' 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82272 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82272 00:19:16.126 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.384 rmmod nvme_tcp 00:19:16.384 rmmod nvme_fabrics 00:19:16.384 rmmod nvme_keyring 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81847 ']' 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81847 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81847 ']' 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81847 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.384 22:30:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81847 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.642 killing process with pid 81847 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81847' 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81847 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81847 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.642 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.643 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.643 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.643 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.643 22:30:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:16.944 00:19:16.944 real 0m45.586s 00:19:16.944 user 2m11.465s 00:19:16.944 sys 0m6.640s 00:19:16.944 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.944 ************************************ 00:19:16.944 END TEST nvmf_timeout 00:19:16.944 ************************************ 00:19:16.944 22:30:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.944 22:30:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:16.944 22:30:30 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:16.944 22:30:30 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:16.944 22:30:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.944 22:30:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.944 22:30:30 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:16.944 00:19:16.944 real 11m22.252s 00:19:16.944 user 26m48.322s 00:19:16.944 sys 3m24.649s 00:19:16.944 22:30:30 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.944 22:30:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.944 ************************************ 00:19:16.944 END TEST nvmf_tcp 00:19:16.944 ************************************ 00:19:16.944 22:30:30 -- common/autotest_common.sh@1142 -- 
# return 0 00:19:16.944 22:30:30 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:16.944 22:30:30 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:16.944 22:30:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:16.944 22:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.944 22:30:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.944 ************************************ 00:19:16.944 START TEST nvmf_dif 00:19:16.944 ************************************ 00:19:16.944 22:30:30 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:17.205 * Looking for test storage... 00:19:17.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:17.205 22:30:30 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.205 22:30:30 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.205 22:30:30 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.205 22:30:30 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.205 22:30:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.205 22:30:30 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.205 22:30:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.205 22:30:30 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:17.205 22:30:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.205 22:30:30 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.206 22:30:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:17.206 22:30:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:17.206 22:30:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:17.206 22:30:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:17.206 22:30:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.206 22:30:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:17.206 22:30:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:17.206 22:30:30 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:17.206 Cannot find device "nvmf_tgt_br" 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.206 Cannot find device "nvmf_tgt_br2" 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:17.206 Cannot find device "nvmf_tgt_br" 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:17.206 Cannot find device "nvmf_tgt_br2" 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:17.206 22:30:30 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:17.465 22:30:30 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.465 22:30:31 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:17.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:19:17.723 00:19:17.723 --- 10.0.0.2 ping statistics --- 00:19:17.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.723 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:17.723 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:17.723 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:19:17.723 00:19:17.723 --- 10.0.0.3 ping statistics --- 00:19:17.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.723 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:19:17.723 00:19:17.723 --- 10.0.0.1 ping statistics --- 00:19:17.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.723 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:17.723 22:30:31 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:18.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:18.289 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:18.289 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.289 22:30:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:18.289 22:30:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82776 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82776 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82776 ']' 00:19:18.289 22:30:31 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.289 22:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:18.289 [2024-07-15 22:30:31.826484] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:19:18.289 [2024-07-15 22:30:31.826557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.548 [2024-07-15 22:30:31.968378] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.548 [2024-07-15 22:30:32.114510] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
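For reference, the veth topology that the pings above were validating (10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) condenses to the following; a sketch of the nvmf_veth_init steps traced above, not the full helper:

    # Target interfaces live in their own network namespace; the host keeps 10.0.0.1.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # The *_br peers are then enslaved to a bridge (nvmf_br) so the two sides can reach each other.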
00:19:18.548 [2024-07-15 22:30:32.114571] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.548 [2024-07-15 22:30:32.114580] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.548 [2024-07-15 22:30:32.114589] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.548 [2024-07-15 22:30:32.114604] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.548 [2024-07-15 22:30:32.114645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.548 [2024-07-15 22:30:32.167750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:19.113 22:30:32 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.113 22:30:32 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:19.113 22:30:32 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.113 22:30:32 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.113 22:30:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:19.372 22:30:32 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.372 22:30:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:19.372 22:30:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:19.372 [2024-07-15 22:30:32.757003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.372 22:30:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.372 22:30:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:19.372 ************************************ 00:19:19.372 START TEST fio_dif_1_default 00:19:19.372 ************************************ 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:19.372 bdev_null0 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:19.372 
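The DIF-aware target is assembled entirely over RPC: a tcp transport created with --dif-insert-or-strip, a null bdev (size 64, block size 512, 16 bytes of metadata, DIF type 1), and a subsystem to expose it; the namespace and listener registrations follow just below. Condensed into plain rpc.py calls, the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420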
22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:19.372 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:19.373 [2024-07-15 22:30:32.816990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.373 { 00:19:19.373 "params": { 00:19:19.373 "name": "Nvme$subsystem", 00:19:19.373 "trtype": "$TEST_TRANSPORT", 00:19:19.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.373 "adrfam": "ipv4", 00:19:19.373 "trsvcid": "$NVMF_PORT", 00:19:19.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.373 "hdgst": ${hdgst:-false}, 00:19:19.373 "ddgst": ${ddgst:-false} 00:19:19.373 }, 00:19:19.373 "method": "bdev_nvme_attach_controller" 00:19:19.373 } 00:19:19.373 EOF 00:19:19.373 )") 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:19.373 "params": { 00:19:19.373 "name": "Nvme0", 00:19:19.373 "trtype": "tcp", 00:19:19.373 "traddr": "10.0.0.2", 00:19:19.373 "adrfam": "ipv4", 00:19:19.373 "trsvcid": "4420", 00:19:19.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.373 "hdgst": false, 00:19:19.373 "ddgst": false 00:19:19.373 }, 00:19:19.373 "method": "bdev_nvme_attach_controller" 00:19:19.373 }' 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:19.373 22:30:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.632 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:19.632 fio-3.35 00:19:19.632 Starting 1 thread 00:19:31.856 00:19:31.856 filename0: (groupid=0, jobs=1): err= 0: pid=82843: Mon Jul 15 22:30:43 2024 00:19:31.857 read: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(477MiB/10001msec) 00:19:31.857 slat (nsec): min=5653, max=46713, avg=6068.78, stdev=854.17 00:19:31.857 clat (usec): min=281, max=2510, avg=310.95, stdev=19.83 00:19:31.857 lat (usec): min=287, max=2544, avg=317.02, stdev=20.05 00:19:31.857 clat percentiles (usec): 00:19:31.857 | 1.00th=[ 293], 
5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 00:19:31.857 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 314], 00:19:31.857 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 326], 95.00th=[ 330], 00:19:31.857 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 494], 00:19:31.857 | 99.99th=[ 898] 00:19:31.857 bw ( KiB/s): min=48128, max=49120, per=100.00%, avg=48889.26, stdev=250.29, samples=19 00:19:31.857 iops : min=12032, max=12280, avg=12222.32, stdev=62.57, samples=19 00:19:31.857 lat (usec) : 500=99.95%, 750=0.03%, 1000=0.01% 00:19:31.857 lat (msec) : 2=0.01%, 4=0.01% 00:19:31.857 cpu : usr=81.72%, sys=16.96%, ctx=55, majf=0, minf=0 00:19:31.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.857 issued rwts: total=122140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:31.857 00:19:31.857 Run status group 0 (all jobs): 00:19:31.857 READ: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=477MiB (500MB), run=10001-10001msec 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 00:19:31.857 real 0m10.959s 00:19:31.857 user 0m8.777s 00:19:31.857 sys 0m2.007s 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 ************************************ 00:19:31.857 END TEST fio_dif_1_default 00:19:31.857 ************************************ 00:19:31.857 22:30:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:31.857 22:30:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:31.857 22:30:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:31.857 22:30:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 ************************************ 00:19:31.857 START TEST fio_dif_1_multi_subsystems 00:19:31.857 
************************************ 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 bdev_null0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 [2024-07-15 22:30:43.844805] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 bdev_null1 
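Both this multi-subsystem case and the single-subsystem run above drive I/O through fio's external SPDK bdev engine rather than the kernel initiator: gen_nvmf_target_json emits a bdev_nvme_attach_controller config on one file descriptor and fio_bdev preloads the spdk_bdev plugin before launching fio. Stripped of the fd plumbing, the launch looks roughly like this sketch, where bdev.json stands in for the generated config and the job options mirror the randread/4k/iodepth-4 job reported above (Nvme0n1 being the bdev SPDK creates for the attached Nvme0 controller):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json \
        --thread=1 --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=4096 --iodepth=4 --runtime=10 --time_based=1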
00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.857 { 00:19:31.857 "params": { 00:19:31.857 "name": "Nvme$subsystem", 00:19:31.857 "trtype": "$TEST_TRANSPORT", 00:19:31.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.857 "adrfam": "ipv4", 00:19:31.857 "trsvcid": "$NVMF_PORT", 00:19:31.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.857 "hdgst": ${hdgst:-false}, 00:19:31.857 "ddgst": ${ddgst:-false} 00:19:31.857 }, 00:19:31.857 "method": "bdev_nvme_attach_controller" 00:19:31.857 } 00:19:31.857 EOF 00:19:31.857 )") 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local 
file 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.857 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.857 { 00:19:31.857 "params": { 00:19:31.857 "name": "Nvme$subsystem", 00:19:31.857 "trtype": "$TEST_TRANSPORT", 00:19:31.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.858 "adrfam": "ipv4", 00:19:31.858 "trsvcid": "$NVMF_PORT", 00:19:31.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.858 "hdgst": ${hdgst:-false}, 00:19:31.858 "ddgst": ${ddgst:-false} 00:19:31.858 }, 00:19:31.858 "method": "bdev_nvme_attach_controller" 00:19:31.858 } 00:19:31.858 EOF 00:19:31.858 )") 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
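[annotation] The jq/printf step below emits the merged bdev configuration: one bdev_nvme_attach_controller entry per subsystem, both pointing at the TCP listener created above, so the fio spdk_bdev plugin ends up with a bdev for cnode0 and one for cnode1. The test hands that JSON to fio on /dev/fd/62 and the generated job file on /dev/fd/61; a hand-rolled equivalent with ordinary files is sketched here (bdev.json and dif.fio are made-up names standing in for the process substitutions, and the outer subsystems/bdev/config wrapper around the printed entries is assumed):

# Sketch only: standalone form of the fio launch traced further below.
# bdev.json is assumed to look like:
#   { "subsystems": [ { "subsystem": "bdev", "config": [ <printed entries> ] } ] }
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio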
00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:31.858 "params": { 00:19:31.858 "name": "Nvme0", 00:19:31.858 "trtype": "tcp", 00:19:31.858 "traddr": "10.0.0.2", 00:19:31.858 "adrfam": "ipv4", 00:19:31.858 "trsvcid": "4420", 00:19:31.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:31.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:31.858 "hdgst": false, 00:19:31.858 "ddgst": false 00:19:31.858 }, 00:19:31.858 "method": "bdev_nvme_attach_controller" 00:19:31.858 },{ 00:19:31.858 "params": { 00:19:31.858 "name": "Nvme1", 00:19:31.858 "trtype": "tcp", 00:19:31.858 "traddr": "10.0.0.2", 00:19:31.858 "adrfam": "ipv4", 00:19:31.858 "trsvcid": "4420", 00:19:31.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.858 "hdgst": false, 00:19:31.858 "ddgst": false 00:19:31.858 }, 00:19:31.858 "method": "bdev_nvme_attach_controller" 00:19:31.858 }' 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.858 22:30:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:31.858 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:31.858 fio-3.35 00:19:31.858 Starting 2 threads 00:19:41.856 00:19:41.856 filename0: (groupid=0, jobs=1): err= 0: pid=83001: Mon Jul 15 22:30:54 2024 00:19:41.856 read: IOPS=6389, BW=25.0MiB/s (26.2MB/s)(250MiB/10001msec) 00:19:41.856 slat (nsec): min=5808, max=38342, avg=11397.85, stdev=2859.25 00:19:41.856 clat (usec): min=488, max=1457, avg=596.26, stdev=28.30 00:19:41.856 lat (usec): min=496, max=1495, avg=607.65, stdev=29.34 00:19:41.856 clat percentiles (usec): 00:19:41.856 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 578], 00:19:41.856 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 603], 00:19:41.856 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 627], 95.00th=[ 635], 00:19:41.856 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 709], 99.95th=[ 734], 00:19:41.856 | 99.99th=[ 996] 00:19:41.856 bw ( KiB/s): min=25216, max=25792, per=50.07%, avg=25597.26, stdev=147.27, samples=19 00:19:41.856 iops : min= 6304, max= 
6448, avg=6399.32, stdev=36.82, samples=19 00:19:41.856 lat (usec) : 500=0.01%, 750=99.96%, 1000=0.03% 00:19:41.857 lat (msec) : 2=0.01% 00:19:41.857 cpu : usr=89.22%, sys=9.80%, ctx=11, majf=0, minf=9 00:19:41.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.857 issued rwts: total=63904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:41.857 filename1: (groupid=0, jobs=1): err= 0: pid=83002: Mon Jul 15 22:30:54 2024 00:19:41.857 read: IOPS=6389, BW=25.0MiB/s (26.2MB/s)(250MiB/10001msec) 00:19:41.857 slat (nsec): min=5839, max=60602, avg=11300.85, stdev=2827.74 00:19:41.857 clat (usec): min=529, max=1298, avg=596.22, stdev=22.01 00:19:41.857 lat (usec): min=540, max=1358, avg=607.53, stdev=22.16 00:19:41.857 clat percentiles (usec): 00:19:41.857 | 1.00th=[ 553], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:19:41.857 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 594], 60.00th=[ 603], 00:19:41.857 | 70.00th=[ 611], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 627], 00:19:41.857 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 742], 00:19:41.857 | 99.99th=[ 1037] 00:19:41.857 bw ( KiB/s): min=25216, max=25792, per=50.08%, avg=25600.00, stdev=147.42, samples=19 00:19:41.857 iops : min= 6304, max= 6448, avg=6400.00, stdev=36.85, samples=19 00:19:41.857 lat (usec) : 750=99.96%, 1000=0.03% 00:19:41.857 lat (msec) : 2=0.01% 00:19:41.857 cpu : usr=89.81%, sys=9.23%, ctx=21, majf=0, minf=0 00:19:41.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.857 issued rwts: total=63904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:41.857 00:19:41.857 Run status group 0 (all jobs): 00:19:41.857 READ: bw=49.9MiB/s (52.3MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=499MiB (524MB), run=10001-10001msec 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 00:19:41.857 real 0m11.106s 00:19:41.857 user 0m18.641s 00:19:41.857 sys 0m2.220s 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 ************************************ 00:19:41.857 END TEST fio_dif_1_multi_subsystems 00:19:41.857 ************************************ 00:19:41.857 22:30:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:41.857 22:30:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:41.857 22:30:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.857 22:30:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 ************************************ 00:19:41.857 START TEST fio_dif_rand_params 00:19:41.857 ************************************ 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:41.857 22:30:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 bdev_null0 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:41.857 [2024-07-15 22:30:55.024656] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.857 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.858 { 00:19:41.858 "params": { 00:19:41.858 "name": "Nvme$subsystem", 00:19:41.858 "trtype": "$TEST_TRANSPORT", 00:19:41.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.858 "adrfam": "ipv4", 00:19:41.858 "trsvcid": "$NVMF_PORT", 00:19:41.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.858 "hdgst": ${hdgst:-false}, 00:19:41.858 "ddgst": ${ddgst:-false} 00:19:41.858 }, 00:19:41.858 "method": "bdev_nvme_attach_controller" 00:19:41.858 } 00:19:41.858 EOF 00:19:41.858 )") 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
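[annotation] This fio_dif_rand_params pass runs against a single DIF-type-3 null bdev behind cnode0 and, per the parameters set above, drives it with three 128 KiB random-read jobs at queue depth 3 for 5 seconds. The job file itself is generated by gen_fio_conf and piped in on /dev/fd/61; a hedged approximation of its contents follows (filename=Nvme0n1 is an assumption about the bdev name produced by attaching controller Nvme0; the ioengine and JSON config still come from the command line as in the trace):

# Sketch only: approximate job file for this pass; the real one is generated on the fly.
cat > dif_rand.fio <<EOF
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF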
00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:41.858 "params": { 00:19:41.858 "name": "Nvme0", 00:19:41.858 "trtype": "tcp", 00:19:41.858 "traddr": "10.0.0.2", 00:19:41.858 "adrfam": "ipv4", 00:19:41.858 "trsvcid": "4420", 00:19:41.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:41.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:41.858 "hdgst": false, 00:19:41.858 "ddgst": false 00:19:41.858 }, 00:19:41.858 "method": "bdev_nvme_attach_controller" 00:19:41.858 }' 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:41.858 22:30:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.858 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:41.858 ... 
00:19:41.858 fio-3.35 00:19:41.858 Starting 3 threads 00:19:47.167 00:19:47.167 filename0: (groupid=0, jobs=1): err= 0: pid=83162: Mon Jul 15 22:31:00 2024 00:19:47.167 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(208MiB/5002msec) 00:19:47.167 slat (nsec): min=5912, max=32994, avg=9073.38, stdev=3447.19 00:19:47.167 clat (usec): min=7305, max=9776, avg=8989.11, stdev=125.65 00:19:47.167 lat (usec): min=7315, max=9791, avg=8998.19, stdev=125.72 00:19:47.167 clat percentiles (usec): 00:19:47.167 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[ 8979], 20.00th=[ 8979], 00:19:47.167 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 00:19:47.167 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 00:19:47.167 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[ 9765], 99.95th=[ 9765], 00:19:47.167 | 99.99th=[ 9765] 00:19:47.167 bw ( KiB/s): min=42240, max=43008, per=33.32%, avg=42581.33, stdev=404.77, samples=9 00:19:47.167 iops : min= 330, max= 336, avg=332.67, stdev= 3.16, samples=9 00:19:47.167 lat (msec) : 10=100.00% 00:19:47.167 cpu : usr=89.28%, sys=10.30%, ctx=12, majf=0, minf=0 00:19:47.167 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.167 issued rwts: total=1665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.167 filename0: (groupid=0, jobs=1): err= 0: pid=83163: Mon Jul 15 22:31:00 2024 00:19:47.167 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(208MiB/5003msec) 00:19:47.167 slat (nsec): min=6197, max=57428, avg=14244.10, stdev=3471.85 00:19:47.167 clat (usec): min=6389, max=9922, avg=8981.11, stdev=157.66 00:19:47.167 lat (usec): min=6402, max=9980, avg=8995.36, stdev=158.12 00:19:47.167 clat percentiles (usec): 00:19:47.167 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8979], 00:19:47.167 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 00:19:47.167 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 00:19:47.167 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[ 9896], 99.95th=[ 9896], 00:19:47.167 | 99.99th=[ 9896] 00:19:47.167 bw ( KiB/s): min=42240, max=43008, per=33.32%, avg=42581.33, stdev=404.77, samples=9 00:19:47.167 iops : min= 330, max= 336, avg=332.67, stdev= 3.16, samples=9 00:19:47.167 lat (msec) : 10=100.00% 00:19:47.167 cpu : usr=90.16%, sys=9.44%, ctx=9, majf=0, minf=9 00:19:47.167 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.167 issued rwts: total=1665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.167 filename0: (groupid=0, jobs=1): err= 0: pid=83164: Mon Jul 15 22:31:00 2024 00:19:47.167 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(208MiB/5002msec) 00:19:47.167 slat (nsec): min=6023, max=30164, avg=14019.64, stdev=3337.40 00:19:47.167 clat (usec): min=6389, max=10981, avg=8980.70, stdev=176.92 00:19:47.167 lat (usec): min=6402, max=11007, avg=8994.72, stdev=177.15 00:19:47.167 clat percentiles (usec): 00:19:47.167 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8979], 00:19:47.167 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 
00:19:47.167 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 00:19:47.167 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[10945], 99.95th=[10945], 00:19:47.167 | 99.99th=[10945] 00:19:47.168 bw ( KiB/s): min=42240, max=43008, per=33.32%, avg=42581.33, stdev=404.77, samples=9 00:19:47.168 iops : min= 330, max= 336, avg=332.67, stdev= 3.16, samples=9 00:19:47.168 lat (msec) : 10=99.82%, 20=0.18% 00:19:47.168 cpu : usr=90.00%, sys=9.26%, ctx=54, majf=0, minf=0 00:19:47.168 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.168 issued rwts: total=1665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.168 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.168 00:19:47.168 Run status group 0 (all jobs): 00:19:47.168 READ: bw=125MiB/s (131MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=624MiB (655MB), run=5002-5003msec 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 bdev_null0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 [2024-07-15 22:31:01.001445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 bdev_null1 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.427 bdev_null2 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.427 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.687 { 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme$subsystem", 00:19:47.687 "trtype": "$TEST_TRANSPORT", 00:19:47.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.687 "adrfam": 
"ipv4", 00:19:47.687 "trsvcid": "$NVMF_PORT", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.687 "hdgst": ${hdgst:-false}, 00:19:47.687 "ddgst": ${ddgst:-false} 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 } 00:19:47.687 EOF 00:19:47.687 )") 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.687 { 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme$subsystem", 00:19:47.687 "trtype": "$TEST_TRANSPORT", 00:19:47.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.687 "adrfam": "ipv4", 00:19:47.687 "trsvcid": "$NVMF_PORT", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.687 "hdgst": ${hdgst:-false}, 00:19:47.687 "ddgst": ${ddgst:-false} 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 } 00:19:47.687 EOF 00:19:47.687 )") 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.687 { 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme$subsystem", 00:19:47.687 "trtype": "$TEST_TRANSPORT", 00:19:47.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.687 "adrfam": "ipv4", 00:19:47.687 "trsvcid": "$NVMF_PORT", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.687 "hdgst": ${hdgst:-false}, 00:19:47.687 "ddgst": ${ddgst:-false} 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 } 00:19:47.687 EOF 00:19:47.687 )") 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme0", 00:19:47.687 "trtype": "tcp", 00:19:47.687 "traddr": "10.0.0.2", 00:19:47.687 "adrfam": "ipv4", 00:19:47.687 "trsvcid": "4420", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:47.687 "hdgst": false, 00:19:47.687 "ddgst": false 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 },{ 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme1", 00:19:47.687 "trtype": "tcp", 00:19:47.687 "traddr": "10.0.0.2", 00:19:47.687 "adrfam": "ipv4", 00:19:47.687 "trsvcid": "4420", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.687 "hdgst": false, 00:19:47.687 "ddgst": false 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 },{ 00:19:47.687 "params": { 00:19:47.687 "name": "Nvme2", 00:19:47.687 "trtype": "tcp", 00:19:47.687 "traddr": "10.0.0.2", 00:19:47.687 "adrfam": "ipv4", 00:19:47.687 "trsvcid": "4420", 00:19:47.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.687 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:47.687 "hdgst": false, 00:19:47.687 "ddgst": false 00:19:47.687 }, 00:19:47.687 "method": "bdev_nvme_attach_controller" 00:19:47.687 }' 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:47.687 22:31:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.687 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:47.687 ... 00:19:47.687 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:47.687 ... 00:19:47.687 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:47.687 ... 00:19:47.687 fio-3.35 00:19:47.687 Starting 24 threads 00:19:59.927 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83260: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=291, BW=1164KiB/s (1192kB/s)(11.4MiB/10020msec) 00:19:59.927 slat (usec): min=2, max=8021, avg=26.15, stdev=305.17 00:19:59.927 clat (msec): min=12, max=106, avg=54.82, stdev=16.21 00:19:59.927 lat (msec): min=12, max=106, avg=54.85, stdev=16.20 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:19:59.927 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 59], 00:19:59.927 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 82], 95.00th=[ 86], 00:19:59.927 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 105], 00:19:59.927 | 99.99th=[ 108] 00:19:59.927 bw ( KiB/s): min= 912, max= 1728, per=4.18%, avg=1162.45, stdev=162.28, samples=20 00:19:59.927 iops : min= 228, max= 432, avg=290.60, stdev=40.58, samples=20 00:19:59.927 lat (msec) : 20=0.31%, 50=43.13%, 100=56.29%, 250=0.27% 00:19:59.927 cpu : usr=31.99%, sys=2.24%, ctx=980, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83261: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=296, BW=1184KiB/s (1213kB/s)(11.6MiB/10002msec) 00:19:59.927 slat (usec): min=6, max=8031, avg=22.66, stdev=254.95 00:19:59.927 clat (usec): min=1975, max=113894, avg=53965.87, stdev=16757.20 00:19:59.927 lat (usec): min=1992, max=113909, avg=53988.52, stdev=16757.23 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 38], 00:19:59.927 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 58], 00:19:59.927 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 82], 95.00th=[ 85], 00:19:59.927 | 99.00th=[ 96], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:19:59.927 | 99.99th=[ 114] 00:19:59.927 bw ( KiB/s): min= 888, max= 1736, per=4.21%, avg=1168.05, stdev=169.71, samples=19 00:19:59.927 iops : min= 222, max= 434, avg=292.00, stdev=42.44, samples=19 00:19:59.927 lat (msec) : 2=0.03%, 10=0.24%, 20=0.47%, 50=44.11%, 100=54.61% 00:19:59.927 lat (msec) : 250=0.54% 00:19:59.927 cpu : usr=31.47%, sys=2.21%, ctx=945, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: 
total=2961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83262: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=293, BW=1174KiB/s (1203kB/s)(11.5MiB/10030msec) 00:19:59.927 slat (usec): min=3, max=8019, avg=22.41, stdev=255.95 00:19:59.927 clat (msec): min=15, max=104, avg=54.36, stdev=15.83 00:19:59.927 lat (msec): min=15, max=104, avg=54.38, stdev=15.82 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 40], 00:19:59.927 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 58], 00:19:59.927 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 80], 95.00th=[ 86], 00:19:59.927 | 99.00th=[ 93], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 105], 00:19:59.927 | 99.99th=[ 105] 00:19:59.927 bw ( KiB/s): min= 1000, max= 1648, per=4.22%, avg=1171.25, stdev=150.03, samples=20 00:19:59.927 iops : min= 250, max= 412, avg=292.80, stdev=37.51, samples=20 00:19:59.927 lat (msec) : 20=0.27%, 50=40.78%, 100=58.78%, 250=0.17% 00:19:59.927 cpu : usr=37.94%, sys=2.66%, ctx=1149, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: total=2945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83263: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=299, BW=1199KiB/s (1228kB/s)(11.7MiB/10005msec) 00:19:59.927 slat (usec): min=2, max=8024, avg=27.85, stdev=309.97 00:19:59.927 clat (msec): min=5, max=109, avg=53.26, stdev=16.84 00:19:59.927 lat (msec): min=5, max=109, avg=53.29, stdev=16.84 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:19:59.927 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 56], 00:19:59.927 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 80], 95.00th=[ 86], 00:19:59.927 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 108], 00:19:59.927 | 99.99th=[ 110] 00:19:59.927 bw ( KiB/s): min= 800, max= 1800, per=4.25%, avg=1182.00, stdev=186.56, samples=19 00:19:59.927 iops : min= 200, max= 450, avg=295.47, stdev=46.65, samples=19 00:19:59.927 lat (msec) : 10=0.20%, 20=0.70%, 50=43.75%, 100=54.62%, 250=0.73% 00:19:59.927 cpu : usr=38.06%, sys=2.78%, ctx=1134, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: total=2999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83264: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=294, BW=1180KiB/s (1208kB/s)(11.6MiB/10042msec) 00:19:59.927 slat (usec): min=6, max=8021, avg=16.94, stdev=164.59 00:19:59.927 clat (msec): min=2, max=119, avg=54.11, stdev=18.54 00:19:59.927 lat (msec): min=2, max=119, avg=54.13, stdev=18.54 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 41], 00:19:59.927 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 
00:19:59.927 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 81], 95.00th=[ 86], 00:19:59.927 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 111], 00:19:59.927 | 99.99th=[ 121] 00:19:59.927 bw ( KiB/s): min= 888, max= 2040, per=4.24%, avg=1178.40, stdev=256.34, samples=20 00:19:59.927 iops : min= 222, max= 510, avg=294.60, stdev=64.09, samples=20 00:19:59.927 lat (msec) : 4=1.72%, 10=1.76%, 20=0.37%, 50=36.50%, 100=59.55% 00:19:59.927 lat (msec) : 250=0.10% 00:19:59.927 cpu : usr=33.97%, sys=2.99%, ctx=1032, majf=0, minf=0 00:19:59.927 IO depths : 1=0.2%, 2=0.7%, 4=2.0%, 8=80.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: total=2962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83265: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=277, BW=1110KiB/s (1137kB/s)(10.9MiB/10039msec) 00:19:59.927 slat (usec): min=6, max=6414, avg=27.99, stdev=253.65 00:19:59.927 clat (msec): min=10, max=116, avg=57.48, stdev=17.03 00:19:59.927 lat (msec): min=10, max=116, avg=57.51, stdev=17.04 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 45], 00:19:59.927 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 60], 00:19:59.927 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 88], 00:19:59.927 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 116], 99.95th=[ 117], 00:19:59.927 | 99.99th=[ 117] 00:19:59.927 bw ( KiB/s): min= 784, max= 1776, per=3.99%, avg=1107.80, stdev=202.51, samples=20 00:19:59.927 iops : min= 196, max= 444, avg=276.95, stdev=50.63, samples=20 00:19:59.927 lat (msec) : 20=1.79%, 50=28.13%, 100=69.82%, 250=0.25% 00:19:59.927 cpu : usr=41.18%, sys=3.15%, ctx=1361, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=73.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 complete : 0=0.0%, 4=90.0%, 8=8.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.927 issued rwts: total=2787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.927 filename0: (groupid=0, jobs=1): err= 0: pid=83266: Mon Jul 15 22:31:12 2024 00:19:59.927 read: IOPS=298, BW=1196KiB/s (1225kB/s)(11.7MiB/10027msec) 00:19:59.927 slat (usec): min=4, max=8022, avg=27.67, stdev=258.43 00:19:59.927 clat (msec): min=15, max=102, avg=53.38, stdev=16.46 00:19:59.927 lat (msec): min=15, max=102, avg=53.41, stdev=16.46 00:19:59.927 clat percentiles (msec): 00:19:59.927 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 39], 00:19:59.927 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 57], 00:19:59.927 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 80], 95.00th=[ 86], 00:19:59.927 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 104], 00:19:59.927 | 99.99th=[ 104] 00:19:59.927 bw ( KiB/s): min= 1024, max= 1816, per=4.29%, avg=1192.80, stdev=169.94, samples=20 00:19:59.927 iops : min= 256, max= 454, avg=298.20, stdev=42.49, samples=20 00:19:59.927 lat (msec) : 20=1.27%, 50=42.33%, 100=56.30%, 250=0.10% 00:19:59.927 cpu : usr=40.26%, sys=2.82%, ctx=1333, majf=0, minf=9 00:19:59.927 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:59.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename0: (groupid=0, jobs=1): err= 0: pid=83267: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=292, BW=1169KiB/s (1197kB/s)(11.5MiB/10036msec) 00:19:59.928 slat (usec): min=6, max=4046, avg=16.81, stdev=119.14 00:19:59.928 clat (msec): min=10, max=107, avg=54.65, stdev=16.77 00:19:59.928 lat (msec): min=10, max=107, avg=54.67, stdev=16.77 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:19:59.928 | 30.00th=[ 46], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 58], 00:19:59.928 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 80], 95.00th=[ 87], 00:19:59.928 | 99.00th=[ 95], 99.50th=[ 95], 99.90th=[ 106], 99.95th=[ 107], 00:19:59.928 | 99.99th=[ 108] 00:19:59.928 bw ( KiB/s): min= 896, max= 1784, per=4.19%, avg=1165.80, stdev=190.86, samples=20 00:19:59.928 iops : min= 224, max= 446, avg=291.45, stdev=47.72, samples=20 00:19:59.928 lat (msec) : 20=1.16%, 50=35.91%, 100=62.76%, 250=0.17% 00:19:59.928 cpu : usr=44.28%, sys=3.34%, ctx=1386, majf=0, minf=9 00:19:59.928 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83268: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=291, BW=1165KiB/s (1193kB/s)(11.4MiB/10035msec) 00:19:59.928 slat (usec): min=3, max=7030, avg=22.86, stdev=210.20 00:19:59.928 clat (msec): min=8, max=117, avg=54.78, stdev=17.15 00:19:59.928 lat (msec): min=8, max=117, avg=54.80, stdev=17.15 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 40], 00:19:59.928 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 57], 00:19:59.928 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 82], 95.00th=[ 87], 00:19:59.928 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 111], 99.95th=[ 118], 00:19:59.928 | 99.99th=[ 118] 00:19:59.928 bw ( KiB/s): min= 896, max= 1840, per=4.18%, avg=1162.40, stdev=199.17, samples=20 00:19:59.928 iops : min= 224, max= 460, avg=290.60, stdev=49.79, samples=20 00:19:59.928 lat (msec) : 10=0.51%, 20=1.64%, 50=36.33%, 100=61.31%, 250=0.21% 00:19:59.928 cpu : usr=43.37%, sys=3.60%, ctx=1485, majf=0, minf=9 00:19:59.928 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=88.8%, 8=9.9%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83269: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=298, BW=1194KiB/s (1223kB/s)(11.7MiB/10041msec) 00:19:59.928 slat (usec): min=4, max=8011, avg=20.39, stdev=217.14 00:19:59.928 clat (usec): min=1221, max=119918, avg=53443.68, stdev=18592.29 00:19:59.928 lat (usec): min=1228, max=119932, avg=53464.06, stdev=18593.38 00:19:59.928 clat 
percentiles (usec): 00:19:59.928 | 1.00th=[ 1500], 5.00th=[ 23987], 10.00th=[ 33162], 20.00th=[ 40633], 00:19:59.928 | 30.00th=[ 47449], 40.00th=[ 49546], 50.00th=[ 55837], 60.00th=[ 58983], 00:19:59.928 | 70.00th=[ 60031], 80.00th=[ 63177], 90.00th=[ 79168], 95.00th=[ 84411], 00:19:59.928 | 99.00th=[ 93848], 99.50th=[ 95945], 99.90th=[104334], 99.95th=[106431], 00:19:59.928 | 99.99th=[120062] 00:19:59.928 bw ( KiB/s): min= 920, max= 2304, per=4.29%, avg=1192.40, stdev=305.00, samples=20 00:19:59.928 iops : min= 230, max= 576, avg=298.10, stdev=76.25, samples=20 00:19:59.928 lat (msec) : 2=1.07%, 4=2.14%, 10=0.53%, 20=0.80%, 50=36.84% 00:19:59.928 lat (msec) : 100=58.53%, 250=0.10% 00:19:59.928 cpu : usr=35.41%, sys=2.46%, ctx=1424, majf=0, minf=0 00:19:59.928 IO depths : 1=0.2%, 2=0.8%, 4=2.8%, 8=79.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83270: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=283, BW=1134KiB/s (1162kB/s)(11.1MiB/10039msec) 00:19:59.928 slat (usec): min=6, max=8022, avg=25.17, stdev=259.95 00:19:59.928 clat (msec): min=14, max=119, avg=56.29, stdev=16.70 00:19:59.928 lat (msec): min=14, max=119, avg=56.31, stdev=16.70 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 45], 00:19:59.928 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 60], 00:19:59.928 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 83], 95.00th=[ 86], 00:19:59.928 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 110], 00:19:59.928 | 99.99th=[ 121] 00:19:59.928 bw ( KiB/s): min= 888, max= 1704, per=4.07%, avg=1131.90, stdev=171.07, samples=20 00:19:59.928 iops : min= 222, max= 426, avg=282.95, stdev=42.77, samples=20 00:19:59.928 lat (msec) : 20=0.95%, 50=34.67%, 100=63.65%, 250=0.74% 00:19:59.928 cpu : usr=34.93%, sys=2.58%, ctx=1006, majf=0, minf=9 00:19:59.928 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=80.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83271: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=278, BW=1113KiB/s (1139kB/s)(10.9MiB/10024msec) 00:19:59.928 slat (usec): min=3, max=8022, avg=22.52, stdev=229.36 00:19:59.928 clat (msec): min=15, max=126, avg=57.35, stdev=19.48 00:19:59.928 lat (msec): min=15, max=126, avg=57.38, stdev=19.48 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:19:59.928 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.928 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 95], 00:19:59.928 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 127], 00:19:59.928 | 99.99th=[ 127] 00:19:59.928 bw ( KiB/s): min= 768, max= 1896, per=3.99%, avg=1108.80, stdev=244.39, samples=20 00:19:59.928 iops : min= 192, max= 474, avg=277.20, stdev=61.10, samples=20 00:19:59.928 lat (msec) : 20=1.54%, 50=34.58%, 
100=61.51%, 250=2.37% 00:19:59.928 cpu : usr=38.36%, sys=2.67%, ctx=1102, majf=0, minf=9 00:19:59.928 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83272: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=285, BW=1142KiB/s (1170kB/s)(11.2MiB/10035msec) 00:19:59.928 slat (usec): min=6, max=4038, avg=17.76, stdev=129.83 00:19:59.928 clat (msec): min=15, max=119, avg=55.91, stdev=16.82 00:19:59.928 lat (msec): min=15, max=119, avg=55.93, stdev=16.82 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 43], 00:19:59.928 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 00:19:59.928 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 87], 00:19:59.928 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 121], 00:19:59.928 | 99.99th=[ 121] 00:19:59.928 bw ( KiB/s): min= 896, max= 1760, per=4.10%, avg=1139.65, stdev=176.11, samples=20 00:19:59.928 iops : min= 224, max= 440, avg=284.90, stdev=44.03, samples=20 00:19:59.928 lat (msec) : 20=1.36%, 50=34.40%, 100=63.57%, 250=0.66% 00:19:59.928 cpu : usr=38.64%, sys=2.66%, ctx=1133, majf=0, minf=0 00:19:59.928 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83273: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=292, BW=1169KiB/s (1197kB/s)(11.4MiB/10012msec) 00:19:59.928 slat (usec): min=2, max=9028, avg=21.65, stdev=235.76 00:19:59.928 clat (msec): min=12, max=120, avg=54.68, stdev=16.69 00:19:59.928 lat (msec): min=12, max=120, avg=54.71, stdev=16.69 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 39], 00:19:59.928 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.928 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 82], 95.00th=[ 85], 00:19:59.928 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 120], 99.95th=[ 121], 00:19:59.928 | 99.99th=[ 121] 00:19:59.928 bw ( KiB/s): min= 888, max= 1712, per=4.19%, avg=1163.10, stdev=159.71, samples=20 00:19:59.928 iops : min= 222, max= 428, avg=290.75, stdev=39.93, samples=20 00:19:59.928 lat (msec) : 20=0.89%, 50=41.57%, 100=57.33%, 250=0.21% 00:19:59.928 cpu : usr=30.97%, sys=2.21%, ctx=1141, majf=0, minf=9 00:19:59.928 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.928 issued rwts: total=2925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.928 filename1: (groupid=0, jobs=1): err= 0: pid=83274: Mon Jul 15 22:31:12 2024 00:19:59.928 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10028msec) 00:19:59.928 slat 
(usec): min=6, max=8024, avg=27.49, stdev=319.74 00:19:59.928 clat (msec): min=12, max=128, avg=59.02, stdev=17.72 00:19:59.928 lat (msec): min=12, max=128, avg=59.05, stdev=17.72 00:19:59.928 clat percentiles (msec): 00:19:59.928 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 47], 00:19:59.928 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:19:59.928 | 70.00th=[ 63], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 94], 00:19:59.928 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 120], 99.95th=[ 129], 00:19:59.928 | 99.99th=[ 129] 00:19:59.928 bw ( KiB/s): min= 768, max= 1672, per=3.88%, avg=1078.65, stdev=199.79, samples=20 00:19:59.928 iops : min= 192, max= 418, avg=269.65, stdev=49.93, samples=20 00:19:59.928 lat (msec) : 20=0.59%, 50=32.72%, 100=65.95%, 250=0.74% 00:19:59.928 cpu : usr=35.74%, sys=2.41%, ctx=1019, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=74.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=89.9%, 8=8.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename1: (groupid=0, jobs=1): err= 0: pid=83275: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.3MiB/10027msec) 00:19:59.929 slat (usec): min=3, max=8025, avg=20.65, stdev=221.32 00:19:59.929 clat (msec): min=14, max=118, avg=55.19, stdev=15.95 00:19:59.929 lat (msec): min=14, max=118, avg=55.21, stdev=15.96 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:19:59.929 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 80], 95.00th=[ 85], 00:19:59.929 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 107], 99.95th=[ 112], 00:19:59.929 | 99.99th=[ 120] 00:19:59.929 bw ( KiB/s): min= 944, max= 1760, per=4.15%, avg=1153.65, stdev=168.64, samples=20 00:19:59.929 iops : min= 236, max= 440, avg=288.40, stdev=42.16, samples=20 00:19:59.929 lat (msec) : 20=0.14%, 50=42.36%, 100=57.36%, 250=0.14% 00:19:59.929 cpu : usr=32.55%, sys=2.38%, ctx=1017, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83276: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=294, BW=1176KiB/s (1205kB/s)(11.5MiB/10003msec) 00:19:59.929 slat (usec): min=2, max=8022, avg=20.64, stdev=221.47 00:19:59.929 clat (msec): min=3, max=150, avg=54.30, stdev=17.92 00:19:59.929 lat (msec): min=3, max=150, avg=54.32, stdev=17.91 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 38], 00:19:59.929 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 82], 95.00th=[ 85], 00:19:59.929 | 99.00th=[ 96], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 150], 00:19:59.929 | 99.99th=[ 150] 00:19:59.929 bw ( KiB/s): min= 896, max= 1712, per=4.16%, avg=1156.21, stdev=185.51, samples=19 00:19:59.929 iops : min= 224, 
max= 428, avg=289.05, stdev=46.38, samples=19 00:19:59.929 lat (msec) : 4=0.34%, 10=0.31%, 20=0.85%, 50=42.05%, 100=55.51% 00:19:59.929 lat (msec) : 250=0.95% 00:19:59.929 cpu : usr=31.02%, sys=2.20%, ctx=1133, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=80.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83277: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=282, BW=1129KiB/s (1157kB/s)(11.0MiB/10005msec) 00:19:59.929 slat (usec): min=6, max=4023, avg=18.11, stdev=130.20 00:19:59.929 clat (msec): min=12, max=125, avg=56.58, stdev=18.14 00:19:59.929 lat (msec): min=12, max=125, avg=56.59, stdev=18.14 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 41], 00:19:59.929 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.929 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 92], 00:19:59.929 | 99.00th=[ 102], 99.50th=[ 110], 99.90th=[ 123], 99.95th=[ 126], 00:19:59.929 | 99.99th=[ 126] 00:19:59.929 bw ( KiB/s): min= 656, max= 1760, per=4.01%, avg=1114.53, stdev=216.28, samples=19 00:19:59.929 iops : min= 164, max= 440, avg=278.63, stdev=54.07, samples=19 00:19:59.929 lat (msec) : 20=0.60%, 50=36.78%, 100=61.59%, 250=1.03% 00:19:59.929 cpu : usr=38.33%, sys=2.52%, ctx=1191, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83278: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=289, BW=1159KiB/s (1187kB/s)(11.3MiB/10020msec) 00:19:59.929 slat (usec): min=3, max=8022, avg=23.60, stdev=260.73 00:19:59.929 clat (msec): min=13, max=116, avg=55.10, stdev=16.49 00:19:59.929 lat (msec): min=13, max=116, avg=55.12, stdev=16.49 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 41], 00:19:59.929 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 59], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 83], 95.00th=[ 85], 00:19:59.929 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 109], 00:19:59.929 | 99.99th=[ 117] 00:19:59.929 bw ( KiB/s): min= 888, max= 1680, per=4.17%, avg=1157.35, stdev=154.53, samples=20 00:19:59.929 iops : min= 222, max= 420, avg=289.30, stdev=38.64, samples=20 00:19:59.929 lat (msec) : 20=0.28%, 50=41.46%, 100=57.75%, 250=0.52% 00:19:59.929 cpu : usr=35.14%, sys=2.37%, ctx=1059, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, 
jobs=1): err= 0: pid=83279: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=283, BW=1134KiB/s (1161kB/s)(11.1MiB/10027msec) 00:19:59.929 slat (usec): min=2, max=9030, avg=28.40, stdev=288.46 00:19:59.929 clat (msec): min=12, max=121, avg=56.28, stdev=17.55 00:19:59.929 lat (msec): min=12, max=121, avg=56.31, stdev=17.55 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 41], 00:19:59.929 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 91], 00:19:59.929 | 99.00th=[ 97], 99.50th=[ 113], 99.90th=[ 121], 99.95th=[ 122], 00:19:59.929 | 99.99th=[ 122] 00:19:59.929 bw ( KiB/s): min= 768, max= 1840, per=4.07%, avg=1130.50, stdev=209.87, samples=20 00:19:59.929 iops : min= 192, max= 460, avg=282.60, stdev=52.46, samples=20 00:19:59.929 lat (msec) : 20=0.60%, 50=36.33%, 100=62.15%, 250=0.91% 00:19:59.929 cpu : usr=42.38%, sys=3.02%, ctx=1693, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83280: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=292, BW=1168KiB/s (1196kB/s)(11.4MiB/10032msec) 00:19:59.929 slat (usec): min=3, max=10019, avg=26.81, stdev=324.36 00:19:59.929 clat (msec): min=13, max=106, avg=54.62, stdev=15.89 00:19:59.929 lat (msec): min=13, max=106, avg=54.65, stdev=15.89 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 42], 00:19:59.929 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 59], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 79], 95.00th=[ 85], 00:19:59.929 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 106], 00:19:59.929 | 99.99th=[ 107] 00:19:59.929 bw ( KiB/s): min= 992, max= 1672, per=4.19%, avg=1165.20, stdev=158.84, samples=20 00:19:59.929 iops : min= 248, max= 418, avg=291.30, stdev=39.71, samples=20 00:19:59.929 lat (msec) : 20=0.20%, 50=42.05%, 100=57.65%, 250=0.10% 00:19:59.929 cpu : usr=31.08%, sys=2.19%, ctx=1170, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83281: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=296, BW=1185KiB/s (1213kB/s)(11.6MiB/10003msec) 00:19:59.929 slat (usec): min=2, max=8020, avg=20.26, stdev=194.62 00:19:59.929 clat (msec): min=2, max=112, avg=53.91, stdev=17.33 00:19:59.929 lat (msec): min=2, max=112, avg=53.93, stdev=17.33 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:19:59.929 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 57], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 80], 95.00th=[ 87], 00:19:59.929 | 99.00th=[ 103], 99.50th=[ 104], 99.90th=[ 112], 99.95th=[ 113], 00:19:59.929 | 
99.99th=[ 113] 00:19:59.929 bw ( KiB/s): min= 784, max= 1776, per=4.21%, avg=1168.42, stdev=196.62, samples=19 00:19:59.929 iops : min= 196, max= 444, avg=292.11, stdev=49.15, samples=19 00:19:59.929 lat (msec) : 4=0.10%, 20=0.94%, 50=42.59%, 100=55.25%, 250=1.11% 00:19:59.929 cpu : usr=38.43%, sys=2.96%, ctx=1201, majf=0, minf=9 00:19:59.929 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:59.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.929 issued rwts: total=2963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.929 filename2: (groupid=0, jobs=1): err= 0: pid=83282: Mon Jul 15 22:31:12 2024 00:19:59.929 read: IOPS=297, BW=1190KiB/s (1219kB/s)(11.6MiB/10001msec) 00:19:59.929 slat (usec): min=4, max=12020, avg=31.09, stdev=374.43 00:19:59.929 clat (usec): min=1086, max=113194, avg=53627.17, stdev=17749.23 00:19:59.929 lat (usec): min=1093, max=113205, avg=53658.26, stdev=17755.45 00:19:59.929 clat percentiles (msec): 00:19:59.929 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 39], 00:19:59.929 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 58], 00:19:59.929 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 81], 95.00th=[ 86], 00:19:59.929 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 106], 99.95th=[ 113], 00:19:59.929 | 99.99th=[ 113] 00:19:59.929 bw ( KiB/s): min= 784, max= 1840, per=4.18%, avg=1160.95, stdev=204.80, samples=19 00:19:59.929 iops : min= 196, max= 460, avg=290.21, stdev=51.22, samples=19 00:19:59.929 lat (msec) : 2=0.24%, 4=0.50%, 10=0.34%, 20=0.97%, 50=41.50% 00:19:59.929 lat (msec) : 100=56.32%, 250=0.13% 00:19:59.929 cpu : usr=37.01%, sys=2.89%, ctx=1050, majf=0, minf=9 00:19:59.930 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:59.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.930 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.930 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.930 filename2: (groupid=0, jobs=1): err= 0: pid=83283: Mon Jul 15 22:31:12 2024 00:19:59.930 read: IOPS=288, BW=1156KiB/s (1183kB/s)(11.3MiB/10011msec) 00:19:59.930 slat (usec): min=3, max=8037, avg=21.94, stdev=257.98 00:19:59.930 clat (msec): min=12, max=120, avg=55.28, stdev=16.53 00:19:59.930 lat (msec): min=12, max=120, avg=55.31, stdev=16.54 00:19:59.930 clat percentiles (msec): 00:19:59.930 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:19:59.930 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 59], 00:19:59.930 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 86], 00:19:59.930 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 115], 99.95th=[ 121], 00:19:59.930 | 99.99th=[ 121] 00:19:59.930 bw ( KiB/s): min= 784, max= 1784, per=4.14%, avg=1151.35, stdev=191.42, samples=20 00:19:59.930 iops : min= 196, max= 446, avg=287.80, stdev=47.86, samples=20 00:19:59.930 lat (msec) : 20=0.76%, 50=41.01%, 100=57.88%, 250=0.35% 00:19:59.930 cpu : usr=32.60%, sys=2.42%, ctx=994, majf=0, minf=9 00:19:59.930 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:59.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.930 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:59.930 issued rwts: total=2892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:59.930 00:19:59.930 Run status group 0 (all jobs): 00:19:59.930 READ: bw=27.1MiB/s (28.4MB/s), 1080KiB/s-1199KiB/s (1106kB/s-1228kB/s), io=272MiB (286MB), run=10001-10042msec 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 bdev_null0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 [2024-07-15 22:31:12.373709] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:59.930 22:31:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 bdev_null1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:59.930 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:59.930 { 00:19:59.930 "params": { 00:19:59.930 "name": "Nvme$subsystem", 00:19:59.930 "trtype": "$TEST_TRANSPORT", 00:19:59.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.930 "adrfam": "ipv4", 00:19:59.930 "trsvcid": "$NVMF_PORT", 00:19:59.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:59.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.930 "hdgst": ${hdgst:-false}, 00:19:59.931 "ddgst": ${ddgst:-false} 00:19:59.931 }, 00:19:59.931 "method": "bdev_nvme_attach_controller" 00:19:59.931 } 00:19:59.931 EOF 00:19:59.931 )") 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:59.931 { 00:19:59.931 "params": { 00:19:59.931 "name": "Nvme$subsystem", 00:19:59.931 "trtype": "$TEST_TRANSPORT", 00:19:59.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.931 "adrfam": "ipv4", 00:19:59.931 "trsvcid": "$NVMF_PORT", 00:19:59.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:59.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.931 "hdgst": ${hdgst:-false}, 00:19:59.931 "ddgst": ${ddgst:-false} 00:19:59.931 }, 00:19:59.931 "method": "bdev_nvme_attach_controller" 00:19:59.931 } 00:19:59.931 EOF 00:19:59.931 )") 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:59.931 "params": { 00:19:59.931 "name": "Nvme0", 00:19:59.931 "trtype": "tcp", 00:19:59.931 "traddr": "10.0.0.2", 00:19:59.931 "adrfam": "ipv4", 00:19:59.931 "trsvcid": "4420", 00:19:59.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:59.931 "hdgst": false, 00:19:59.931 "ddgst": false 00:19:59.931 }, 00:19:59.931 "method": "bdev_nvme_attach_controller" 00:19:59.931 },{ 00:19:59.931 "params": { 00:19:59.931 "name": "Nvme1", 00:19:59.931 "trtype": "tcp", 00:19:59.931 "traddr": "10.0.0.2", 00:19:59.931 "adrfam": "ipv4", 00:19:59.931 "trsvcid": "4420", 00:19:59.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.931 "hdgst": false, 00:19:59.931 "ddgst": false 00:19:59.931 }, 00:19:59.931 "method": "bdev_nvme_attach_controller" 00:19:59.931 }' 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:59.931 22:31:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.931 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:59.931 ... 00:19:59.931 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:59.931 ... 
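[Editor's note, not captured output] For readability: the xtrace above shows the harness emitting two bdev_nvme_attach_controller entries on /dev/fd/62 and a generated fio job file on /dev/fd/61, then launching fio with the external spdk_bdev ioengine preloaded from build/fio/spdk_bdev. The sketch below condenses that into a standalone form. The "subsystems"/"bdev" envelope around the printed entries, the bdev names Nvme0n1/Nvme1n1, and the job-file layout are assumptions for illustration; only the job summary (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) is visible in the trace.

# Editor's sketch (bash), assuming the file names bdev.json and jobs.fio:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobs.fio
#
# bdev.json: the two controller entries printed by gen_nvmf_target_json above, wrapped in
# the SPDK JSON-config envelope the fio bdev plugin expects (the envelope itself is not
# shown in this excerpt and is an assumption here):
#   { "subsystems": [ { "subsystem": "bdev", "config": [
#       { "method": "bdev_nvme_attach_controller",
#         "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
#                     "adrfam": "ipv4", "trsvcid": "4420",
#                     "subnqn": "nqn.2016-06.io.spdk:cnode0",
#                     "hostnqn": "nqn.2016-06.io.spdk:host0",
#                     "hdgst": false, "ddgst": false } },
#       { "method": "bdev_nvme_attach_controller",
#         "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
#                     "adrfam": "ipv4", "trsvcid": "4420",
#                     "subnqn": "nqn.2016-06.io.spdk:cnode1",
#                     "hostnqn": "nqn.2016-06.io.spdk:host1",
#                     "hdgst": false, "ddgst": false } } ] } ] }
#
# jobs.fio: assumed layout matching the "filename0/filename1 ... iodepth=8" job lines above.
# filename= takes the bdev name; Nvme0n1/Nvme1n1 follow SPDK's usual "<controller>n1"
# naming convention and are an assumption here.
#   [global]
#   ioengine=spdk_bdev
#   thread=1                  ; the SPDK fio plugin runs in thread mode
#   rw=randread
#   bs=8k,16k,128k            ; per-direction sizes, matching (R) 8k / (W) 16k / (T) 128k above
#   iodepth=8
#   numjobs=2
#   runtime=5
#   time_based=1
#   [filename0]
#   filename=Nvme0n1
#   [filename1]
#   filename=Nvme1n1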
00:19:59.931 fio-3.35 00:19:59.931 Starting 4 threads 00:20:05.201 00:20:05.201 filename0: (groupid=0, jobs=1): err= 0: pid=83431: Mon Jul 15 22:31:18 2024 00:20:05.201 read: IOPS=2932, BW=22.9MiB/s (24.0MB/s)(115MiB/5002msec) 00:20:05.201 slat (nsec): min=5863, max=40558, avg=10906.16, stdev=3165.21 00:20:05.201 clat (usec): min=732, max=5229, avg=2699.65, stdev=721.04 00:20:05.201 lat (usec): min=739, max=5241, avg=2710.55, stdev=721.44 00:20:05.201 clat percentiles (usec): 00:20:05.201 | 1.00th=[ 1631], 5.00th=[ 1663], 10.00th=[ 1680], 20.00th=[ 1778], 00:20:05.201 | 30.00th=[ 1926], 40.00th=[ 2540], 50.00th=[ 3130], 60.00th=[ 3195], 00:20:05.201 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3392], 95.00th=[ 3458], 00:20:05.201 | 99.00th=[ 3556], 99.50th=[ 3818], 99.90th=[ 3982], 99.95th=[ 4686], 00:20:05.201 | 99.99th=[ 4948] 00:20:05.201 bw ( KiB/s): min=19672, max=25120, per=26.97%, avg=23304.00, stdev=2579.49, samples=9 00:20:05.201 iops : min= 2459, max= 3140, avg=2913.00, stdev=322.44, samples=9 00:20:05.201 lat (usec) : 750=0.03%, 1000=0.38% 00:20:05.201 lat (msec) : 2=32.28%, 4=67.22%, 10=0.09% 00:20:05.201 cpu : usr=90.88%, sys=8.44%, ctx=7, majf=0, minf=9 00:20:05.201 IO depths : 1=0.1%, 2=6.1%, 4=60.5%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 issued rwts: total=14667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:05.201 filename0: (groupid=0, jobs=1): err= 0: pid=83432: Mon Jul 15 22:31:18 2024 00:20:05.201 read: IOPS=3190, BW=24.9MiB/s (26.1MB/s)(125MiB/5001msec) 00:20:05.201 slat (nsec): min=5876, max=39930, avg=9843.33, stdev=3058.87 00:20:05.201 clat (usec): min=589, max=5383, avg=2483.94, stdev=792.60 00:20:05.201 lat (usec): min=596, max=5396, avg=2493.79, stdev=792.07 00:20:05.201 clat percentiles (usec): 00:20:05.201 | 1.00th=[ 1037], 5.00th=[ 1057], 10.00th=[ 1647], 20.00th=[ 1680], 00:20:05.201 | 30.00th=[ 1909], 40.00th=[ 2245], 50.00th=[ 2474], 60.00th=[ 3097], 00:20:05.201 | 70.00th=[ 3130], 80.00th=[ 3359], 90.00th=[ 3392], 95.00th=[ 3458], 00:20:05.201 | 99.00th=[ 3556], 99.50th=[ 3752], 99.90th=[ 3916], 99.95th=[ 3982], 00:20:05.201 | 99.99th=[ 5276] 00:20:05.201 bw ( KiB/s): min=24864, max=27280, per=29.63%, avg=25596.44, stdev=891.41, samples=9 00:20:05.201 iops : min= 3108, max= 3410, avg=3199.56, stdev=111.43, samples=9 00:20:05.201 lat (usec) : 750=0.03%, 1000=0.45% 00:20:05.201 lat (msec) : 2=37.62%, 4=61.87%, 10=0.03% 00:20:05.201 cpu : usr=90.84%, sys=8.44%, ctx=17, majf=0, minf=0 00:20:05.201 IO depths : 1=0.1%, 2=0.2%, 4=63.7%, 8=36.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 issued rwts: total=15958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:05.201 filename1: (groupid=0, jobs=1): err= 0: pid=83433: Mon Jul 15 22:31:18 2024 00:20:05.201 read: IOPS=2339, BW=18.3MiB/s (19.2MB/s)(91.4MiB/5002msec) 00:20:05.201 slat (nsec): min=6733, max=39284, avg=13449.95, stdev=2337.96 00:20:05.201 clat (usec): min=1212, max=4924, avg=3368.86, stdev=235.74 00:20:05.201 lat (usec): min=1229, max=4936, avg=3382.31, stdev=235.40 00:20:05.201 clat percentiles (usec): 00:20:05.201 | 1.00th=[ 
2540], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3195], 00:20:05.201 | 30.00th=[ 3228], 40.00th=[ 3425], 50.00th=[ 3425], 60.00th=[ 3458], 00:20:05.201 | 70.00th=[ 3490], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3556], 00:20:05.201 | 99.00th=[ 4047], 99.50th=[ 4146], 99.90th=[ 4228], 99.95th=[ 4293], 00:20:05.201 | 99.99th=[ 4424] 00:20:05.201 bw ( KiB/s): min=18048, max=20064, per=21.75%, avg=18788.33, stdev=838.13, samples=9 00:20:05.201 iops : min= 2256, max= 2508, avg=2348.44, stdev=104.64, samples=9 00:20:05.201 lat (msec) : 2=0.48%, 4=98.08%, 10=1.44% 00:20:05.201 cpu : usr=90.72%, sys=8.72%, ctx=6, majf=0, minf=0 00:20:05.201 IO depths : 1=0.1%, 2=24.3%, 4=50.5%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 issued rwts: total=11701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:05.201 filename1: (groupid=0, jobs=1): err= 0: pid=83434: Mon Jul 15 22:31:18 2024 00:20:05.201 read: IOPS=2338, BW=18.3MiB/s (19.2MB/s)(91.4MiB/5001msec) 00:20:05.201 slat (nsec): min=6106, max=45723, avg=13493.87, stdev=2340.04 00:20:05.201 clat (usec): min=1192, max=4925, avg=3369.83, stdev=235.89 00:20:05.201 lat (usec): min=1217, max=4937, avg=3383.33, stdev=235.55 00:20:05.201 clat percentiles (usec): 00:20:05.201 | 1.00th=[ 2540], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3195], 00:20:05.201 | 30.00th=[ 3228], 40.00th=[ 3425], 50.00th=[ 3425], 60.00th=[ 3458], 00:20:05.201 | 70.00th=[ 3490], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3556], 00:20:05.201 | 99.00th=[ 4047], 99.50th=[ 4146], 99.90th=[ 4293], 99.95th=[ 4293], 00:20:05.201 | 99.99th=[ 4424] 00:20:05.201 bw ( KiB/s): min=18048, max=20176, per=21.74%, avg=18784.00, stdev=830.27, samples=9 00:20:05.201 iops : min= 2256, max= 2522, avg=2348.00, stdev=103.78, samples=9 00:20:05.201 lat (msec) : 2=0.48%, 4=98.00%, 10=1.52% 00:20:05.201 cpu : usr=91.36%, sys=8.08%, ctx=9, majf=0, minf=0 00:20:05.201 IO depths : 1=0.1%, 2=24.3%, 4=50.5%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.201 issued rwts: total=11694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:05.201 00:20:05.201 Run status group 0 (all jobs): 00:20:05.201 READ: bw=84.4MiB/s (88.5MB/s), 18.3MiB/s-24.9MiB/s (19.2MB/s-26.1MB/s), io=422MiB (443MB), run=5001-5002msec 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.201 ************************************ 00:20:05.201 END TEST fio_dif_rand_params 00:20:05.201 ************************************ 00:20:05.201 00:20:05.201 real 0m23.468s 00:20:05.201 user 2m1.724s 00:20:05.201 sys 0m10.452s 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 22:31:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:05.201 22:31:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:05.201 22:31:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:05.201 22:31:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.201 22:31:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.201 ************************************ 00:20:05.201 START TEST fio_dif_digest 00:20:05.201 ************************************ 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:05.201 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.202 bdev_null0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.202 [2024-07-15 22:31:18.566507] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.202 { 00:20:05.202 "params": { 00:20:05.202 "name": "Nvme$subsystem", 00:20:05.202 "trtype": "$TEST_TRANSPORT", 00:20:05.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.202 "adrfam": "ipv4", 00:20:05.202 "trsvcid": "$NVMF_PORT", 00:20:05.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.202 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.202 "hdgst": ${hdgst:-false}, 00:20:05.202 "ddgst": ${ddgst:-false} 00:20:05.202 }, 00:20:05.202 "method": "bdev_nvme_attach_controller" 00:20:05.202 } 00:20:05.202 EOF 00:20:05.202 )") 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:05.202 "params": { 00:20:05.202 "name": "Nvme0", 00:20:05.202 "trtype": "tcp", 00:20:05.202 "traddr": "10.0.0.2", 00:20:05.202 "adrfam": "ipv4", 00:20:05.202 "trsvcid": "4420", 00:20:05.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:05.202 "hdgst": true, 00:20:05.202 "ddgst": true 00:20:05.202 }, 00:20:05.202 "method": "bdev_nvme_attach_controller" 00:20:05.202 }' 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.202 22:31:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.202 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:05.202 ... 
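[Editor's note, not captured output] The digest pass set up above differs from the preceding random-parameters pass in three ways, all visible in the trace: the null bdev is created with --dif-type 3, only one subsystem (cnode0) is exported, and the generated controller entry sets "hdgst": true and "ddgst": true so the NVMe/TCP connection negotiates header and data digests. The sketch below condenses the traced setup into direct RPC calls; rpc_cmd is the autotest wrapper, and calling scripts/rpc.py against the same running target is assumed to be equivalent. The bdev name Nvme0n1 and the file names bdev.json/digest.fio are illustrative assumptions.

# Editor's sketch (bash): digest-test setup condensed from the rpc_cmd trace above.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
#
# bdev.json carries the single controller entry printed above, this time with
#   "hdgst": true, "ddgst": true
# and fio runs 3 jobs of 128 KiB random reads at iodepth 3 for 10 s against the attached
# bdev (assumed name Nvme0n1), matching the "bs=(R) 128KiB ... iodepth=3" job line above:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio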
00:20:05.202 fio-3.35 00:20:05.202 Starting 3 threads 00:20:17.401 00:20:17.401 filename0: (groupid=0, jobs=1): err= 0: pid=83541: Mon Jul 15 22:31:29 2024 00:20:17.401 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(365MiB/10003msec) 00:20:17.401 slat (usec): min=6, max=121, avg= 9.38, stdev= 4.64 00:20:17.401 clat (usec): min=10117, max=11172, avg=10267.74, stdev=87.12 00:20:17.401 lat (usec): min=10125, max=11210, avg=10277.12, stdev=87.76 00:20:17.401 clat percentiles (usec): 00:20:17.401 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:20:17.401 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:20:17.401 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10421], 00:20:17.401 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11207], 99.95th=[11207], 00:20:17.401 | 99.99th=[11207] 00:20:17.401 bw ( KiB/s): min=36864, max=37632, per=33.36%, avg=37345.05, stdev=377.87, samples=19 00:20:17.401 iops : min= 288, max= 294, avg=291.74, stdev= 2.94, samples=19 00:20:17.401 lat (msec) : 20=100.00% 00:20:17.401 cpu : usr=89.01%, sys=10.29%, ctx=61, majf=0, minf=0 00:20:17.401 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.401 filename0: (groupid=0, jobs=1): err= 0: pid=83542: Mon Jul 15 22:31:29 2024 00:20:17.401 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(365MiB/10001msec) 00:20:17.401 slat (nsec): min=6247, max=52484, avg=9273.69, stdev=3550.74 00:20:17.401 clat (usec): min=7874, max=12445, avg=10266.02, stdev=131.81 00:20:17.401 lat (usec): min=7881, max=12474, avg=10275.29, stdev=132.10 00:20:17.401 clat percentiles (usec): 00:20:17.401 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:20:17.401 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:20:17.401 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10421], 00:20:17.401 | 99.00th=[10683], 99.50th=[10814], 99.90th=[12387], 99.95th=[12387], 00:20:17.401 | 99.99th=[12387] 00:20:17.401 bw ( KiB/s): min=36864, max=37707, per=33.33%, avg=37312.58, stdev=393.39, samples=19 00:20:17.401 iops : min= 288, max= 294, avg=291.47, stdev= 3.04, samples=19 00:20:17.401 lat (msec) : 10=0.10%, 20=99.90% 00:20:17.401 cpu : usr=89.87%, sys=9.69%, ctx=17, majf=0, minf=0 00:20:17.401 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.401 filename0: (groupid=0, jobs=1): err= 0: pid=83543: Mon Jul 15 22:31:29 2024 00:20:17.401 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(365MiB/10002msec) 00:20:17.401 slat (nsec): min=6183, max=58526, avg=8911.12, stdev=3043.88 00:20:17.401 clat (usec): min=9103, max=11655, avg=10267.86, stdev=100.23 00:20:17.401 lat (usec): min=9110, max=11713, avg=10276.77, stdev=100.73 00:20:17.401 clat percentiles (usec): 00:20:17.401 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:20:17.401 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 
60.00th=[10290], 00:20:17.401 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10421], 00:20:17.401 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11600], 99.95th=[11600], 00:20:17.401 | 99.99th=[11600] 00:20:17.401 bw ( KiB/s): min=36864, max=37707, per=33.33%, avg=37312.58, stdev=393.39, samples=19 00:20:17.401 iops : min= 288, max= 294, avg=291.47, stdev= 3.04, samples=19 00:20:17.401 lat (msec) : 10=0.10%, 20=99.90% 00:20:17.401 cpu : usr=89.49%, sys=10.11%, ctx=8, majf=0, minf=0 00:20:17.401 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.401 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.401 00:20:17.401 Run status group 0 (all jobs): 00:20:17.401 READ: bw=109MiB/s (115MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=1094MiB (1147MB), run=10001-10003msec 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.401 ************************************ 00:20:17.401 END TEST fio_dif_digest 00:20:17.401 ************************************ 00:20:17.401 00:20:17.401 real 0m10.981s 00:20:17.401 user 0m27.455s 00:20:17.401 sys 0m3.310s 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.401 22:31:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:17.401 22:31:29 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:17.401 22:31:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:17.401 22:31:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.401 rmmod nvme_tcp 00:20:17.401 rmmod nvme_fabrics 00:20:17.401 rmmod nvme_keyring 00:20:17.401 22:31:29 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:17.401 22:31:29 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82776 ']' 00:20:17.402 22:31:29 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82776 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82776 ']' 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82776 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82776 00:20:17.402 killing process with pid 82776 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82776' 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82776 00:20:17.402 22:31:29 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82776 00:20:17.402 22:31:29 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:17.402 22:31:29 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:17.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:17.402 Waiting for block devices as requested 00:20:17.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.402 22:31:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:17.402 22:31:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.402 22:31:30 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:17.402 00:20:17.402 real 1m0.245s 00:20:17.402 user 3m45.189s 00:20:17.402 sys 0m23.495s 00:20:17.402 22:31:30 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.402 22:31:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:17.402 ************************************ 00:20:17.402 END TEST nvmf_dif 00:20:17.402 ************************************ 00:20:17.402 22:31:30 -- common/autotest_common.sh@1142 -- # return 0 00:20:17.402 22:31:30 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:17.402 22:31:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:17.402 22:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.402 22:31:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.402 ************************************ 00:20:17.402 START TEST nvmf_abort_qd_sizes 00:20:17.402 ************************************ 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:17.402 * Looking for test storage... 00:20:17.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:17.402 22:31:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.402 22:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.402 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.402 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:17.402 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:17.402 Cannot find device "nvmf_tgt_br" 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.660 Cannot find device "nvmf_tgt_br2" 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:17.660 Cannot find device "nvmf_tgt_br" 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:17.660 Cannot find device "nvmf_tgt_br2" 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.660 22:31:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:17.660 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:17.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:17.918 00:20:17.918 --- 10.0.0.2 ping statistics --- 00:20:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.918 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:17.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:20:17.918 00:20:17.918 --- 10.0.0.3 ping statistics --- 00:20:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.918 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:17.918 00:20:17.918 --- 10.0.0.1 ping statistics --- 00:20:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.918 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:17.918 22:31:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:18.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.741 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.741 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.741 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84145 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84145 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84145 ']' 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.999 22:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:18.999 [2024-07-15 22:31:32.466797] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
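The nvmf_veth_init trace above builds the virtual test network: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), an initiator interface nvmf_init_if at 10.0.0.1 on the host, and a bridge nvmf_br joining the host-side peers, with iptables rules admitting NVMe/TCP traffic on port 4420; the pings confirm connectivity before nvmf_tgt is started inside the namespace (its startup banner continues below). A condensed sketch of those steps, using the same names and addresses as the trace and assuming root privileges with iproute2 and iptables available:

    # target-side namespace and the three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and address everything
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring the links up and bridge the host-side peers
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic and verify reachability in both directions
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1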
00:20:18.999 [2024-07-15 22:31:32.466861] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.999 [2024-07-15 22:31:32.611655] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.256 [2024-07-15 22:31:32.711358] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.256 [2024-07-15 22:31:32.711410] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.256 [2024-07-15 22:31:32.711420] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.256 [2024-07-15 22:31:32.711429] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.256 [2024-07-15 22:31:32.711435] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.256 [2024-07-15 22:31:32.711668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.256 [2024-07-15 22:31:32.711921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.256 [2024-07-15 22:31:32.712506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.256 [2024-07-15 22:31:32.712507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.256 [2024-07-15 22:31:32.754989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:19.820 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:19.821 22:31:33 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
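The nvme_in_userspace enumeration traced above finds NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express). The pipeline from the trace, condensed into one command (assumes pciutils' lspci is installed; the single quotes around the cc assignment are deliberate so that it matches lspci's quoted class field):

    # print the addresses of PCI functions whose class field is "0108"
    # and whose programming interface is 02, i.e. NVMe controllers
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0
    #    0000:00:11.0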
00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.821 22:31:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:19.821 ************************************ 00:20:19.821 START TEST spdk_target_abort 00:20:19.821 ************************************ 00:20:19.821 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:19.821 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:19.821 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:19.821 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.821 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:20.089 spdk_targetn1 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:20.089 [2024-07-15 22:31:33.503913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:20.089 [2024-07-15 22:31:33.532010] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.089 22:31:33 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:20.089 22:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:23.378 Initializing NVMe Controllers 00:20:23.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:23.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:23.378 Initialization complete. Launching workers. 
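In the spdk_target_abort trace above, the freshly started nvmf_tgt is provisioned over JSON-RPC: the PCIe controller at 0000:00:10.0 is attached as bdev spdk_target, a TCP transport is created, subsystem nqn.2016-06.io.spdk:testnqn gets spdk_targetn1 as namespace 1, and a listener is opened on 10.0.0.2:4420; the abort example is then driven against that listener at queue depths 4, 24 and 64 (its per-run output follows below). A condensed sketch of the same sequence; rpc.py with the default /var/tmp/spdk.sock socket stands in here for the rpc_cmd helper used by the test:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    # expose the local PCIe NVMe controller through an NVMe-oF/TCP subsystem
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # exercise abort handling at each queue depth
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done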
00:20:23.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13084, failed: 0 00:20:23.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1087, failed to submit 11997 00:20:23.378 success 723, unsuccess 364, failed 0 00:20:23.378 22:31:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:23.378 22:31:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:26.658 Initializing NVMe Controllers 00:20:26.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:26.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:26.658 Initialization complete. Launching workers. 00:20:26.658 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:20:26.658 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1172, failed to submit 7828 00:20:26.658 success 349, unsuccess 823, failed 0 00:20:26.658 22:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:26.658 22:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:29.955 Initializing NVMe Controllers 00:20:29.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:29.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:29.955 Initialization complete. Launching workers. 
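The per-run summaries printed by the abort example above (and by the queue-depth-64 run whose results follow below) are internally consistent; taking the queue-depth-4 run as a worked check:

    completed I/Os on the namespace:       13084
    aborts submitted:                       1087
    aborts that could not be submitted:    11997
    1087 + 11997 = 13084   (every completed I/O is accounted for either by a
                            submitted abort or by one that could not be submitted)
    723 success + 364 unsuccess + 0 failed = 1087 submitted aborts

The same relations hold for the queue-depth-24 run: 1172 + 7828 = 9000 and 349 + 823 = 1172.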
00:20:29.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35212, failed: 0 00:20:29.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2416, failed to submit 32796 00:20:29.955 success 605, unsuccess 1811, failed 0 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.955 22:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84145 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84145 ']' 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84145 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84145 00:20:30.888 killing process with pid 84145 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84145' 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84145 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84145 00:20:30.888 00:20:30.888 real 0m11.058s 00:20:30.888 user 0m43.915s 00:20:30.888 sys 0m2.736s 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.888 ************************************ 00:20:30.888 END TEST spdk_target_abort 00:20:30.888 ************************************ 00:20:30.888 22:31:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.147 22:31:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:31.147 22:31:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:31.147 22:31:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:31.147 22:31:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.147 22:31:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.147 
************************************ 00:20:31.147 START TEST kernel_target_abort 00:20:31.147 ************************************ 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:31.147 22:31:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:31.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.729 Waiting for block devices as requested 00:20:31.729 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.729 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:31.988 No valid GPT data, bailing 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:31.988 No valid GPT data, bailing 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
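The device-selection loop traced above, and continuing below for the remaining namespaces, walks /sys/block/nvme*, skips zoned namespaces, and accepts a namespace only when spdk-gpt.py and blkid report no partition table on it (the repeated "No valid GPT data, bailing" lines); the last device that passes, /dev/nvme1n1 here, becomes the backing block device for the kernel target. A simplified sketch of that loop, checking only blkid rather than the full block_in_use helper from the trace:

    nvme=""
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # skip zoned namespaces
        if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
            continue
        fi
        # keep the namespace only if it carries no partition table
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            nvme=$dev
        fi
    done
    echo "selected backing device: $nvme"    # /dev/nvme1n1 in the run above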
00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:31.988 No valid GPT data, bailing 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:31.988 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:32.246 No valid GPT data, bailing 00:20:32.246 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:32.246 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:32.246 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:32.246 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc --hostid=37374fe9-a847-4b40-94af-b766955abedc -a 10.0.0.1 -t tcp -s 4420 00:20:32.247 00:20:32.247 Discovery Log Number of Records 2, Generation counter 2 00:20:32.247 =====Discovery Log Entry 0====== 00:20:32.247 trtype: tcp 00:20:32.247 adrfam: ipv4 00:20:32.247 subtype: current discovery subsystem 00:20:32.247 treq: not specified, sq flow control disable supported 00:20:32.247 portid: 1 00:20:32.247 trsvcid: 4420 00:20:32.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:32.247 traddr: 10.0.0.1 00:20:32.247 eflags: none 00:20:32.247 sectype: none 00:20:32.247 =====Discovery Log Entry 1====== 00:20:32.247 trtype: tcp 00:20:32.247 adrfam: ipv4 00:20:32.247 subtype: nvme subsystem 00:20:32.247 treq: not specified, sq flow control disable supported 00:20:32.247 portid: 1 00:20:32.247 trsvcid: 4420 00:20:32.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:32.247 traddr: 10.0.0.1 00:20:32.247 eflags: none 00:20:32.247 sectype: none 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:32.247 22:31:45 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:32.247 22:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:35.532 Initializing NVMe Controllers 00:20:35.532 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:35.532 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:35.532 Initialization complete. Launching workers. 00:20:35.532 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35824, failed: 0 00:20:35.532 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35824, failed to submit 0 00:20:35.532 success 0, unsuccess 35824, failed 0 00:20:35.532 22:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:35.532 22:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:38.815 Initializing NVMe Controllers 00:20:38.815 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:38.815 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:38.815 Initialization complete. Launching workers. 
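The configure_kernel_target steps traced further above build the in-kernel nvmet target entirely through configfs: a subsystem directory for nqn.2016-06.io.spdk:testnqn, a namespace backed by the selected /dev/nvme1n1, and port 1 bound to 10.0.0.1:4420 over TCP, joined with ln -s; the discovery log that follows shows both the discovery subsystem and the test subsystem. The xtrace does not show the redirect targets of the echo commands, so the attribute paths below are a hedged reconstruction following the standard nvmet configfs layout (the qd=24 and qd=64 abort runs continue below):

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet    # as in the trace; its teardown also removes nvmet_tcp, so that module ends up loaded too
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$cfs/ports/1"

    # attribute names below are assumed from the usual nvmet configfs layout;
    # only the echoed values are visible in the xtrace
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"

    # enable the listener
    ln -s "$subsys" "$cfs/ports/1/subsystems/"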
00:20:38.815 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75717, failed: 0 00:20:38.815 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37565, failed to submit 38152 00:20:38.815 success 0, unsuccess 37565, failed 0 00:20:38.815 22:31:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:38.815 22:31:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.113 Initializing NVMe Controllers 00:20:42.113 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:42.113 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:42.113 Initialization complete. Launching workers. 00:20:42.113 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89553, failed: 0 00:20:42.113 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22364, failed to submit 67189 00:20:42.113 success 0, unsuccess 22364, failed 0 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:42.113 22:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:42.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.230 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:45.230 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:45.230 00:20:45.230 real 0m14.023s 00:20:45.230 user 0m6.104s 00:20:45.230 sys 0m5.181s 00:20:45.230 22:31:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.230 22:31:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.230 ************************************ 00:20:45.230 END TEST kernel_target_abort 00:20:45.230 ************************************ 00:20:45.230 22:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:45.230 22:31:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:45.230 
22:31:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:45.230 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.230 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:20:45.230 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.231 rmmod nvme_tcp 00:20:45.231 rmmod nvme_fabrics 00:20:45.231 rmmod nvme_keyring 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84145 ']' 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84145 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84145 ']' 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84145 00:20:45.231 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84145) - No such process 00:20:45.231 Process with pid 84145 is not found 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84145 is not found' 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:45.231 22:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.798 Waiting for block devices as requested 00:20:45.798 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.057 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:46.057 00:20:46.057 real 0m28.763s 00:20:46.057 user 0m51.234s 00:20:46.057 sys 0m9.695s 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.057 22:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:46.057 ************************************ 00:20:46.057 END TEST nvmf_abort_qd_sizes 00:20:46.057 ************************************ 00:20:46.057 22:31:59 -- common/autotest_common.sh@1142 -- # return 0 00:20:46.057 22:31:59 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:46.057 22:31:59 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:20:46.057 22:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.057 22:31:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.057 ************************************ 00:20:46.057 START TEST keyring_file 00:20:46.057 ************************************ 00:20:46.057 22:31:59 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:46.315 * Looking for test storage... 00:20:46.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.315 22:31:59 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.315 22:31:59 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.315 22:31:59 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.315 22:31:59 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.315 22:31:59 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.315 22:31:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.315 22:31:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:46.315 22:31:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@47 -- # : 0 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.315 22:31:59 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:46.315 22:31:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tvXveuYtG6 00:20:46.315 22:31:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tvXveuYtG6 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tvXveuYtG6 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tvXveuYtG6 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.J7ZKgGthup 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:46.316 22:31:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.J7ZKgGthup 00:20:46.316 22:31:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.J7ZKgGthup 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.J7ZKgGthup 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=85017 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85017 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85017 ']' 00:20:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.316 22:31:59 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.316 22:31:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:46.573 [2024-07-15 22:31:59.986651] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
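[editor's note] The two /tmp/tmp.* files created above hold TLS PSKs in the NVMe interchange format. Below is a standalone sketch of roughly what prep_key/format_interchange_psk do for key0, under my reading of that format (base64 of the key bytes followed by a little-endian CRC32, with digest 0 selecting the plain "00" variant); treat the encoding details as assumptions rather than a copy of nvmf/common.sh.
key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                       # e.g. /tmp/tmp.tvXveuYtG6 in the run above
python3 - "$key_hex" > "$path" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # CRC32 appended little-endian (assumption)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
chmod 0600 "$path"                   # keyring_file_add_key insists on owner-only permissions (see the 0660 negative test later)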
00:20:46.573 [2024-07-15 22:31:59.986726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85017 ] 00:20:46.573 [2024-07-15 22:32:00.130194] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.873 [2024-07-15 22:32:00.224389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.873 [2024-07-15 22:32:00.265266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:47.455 22:32:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:47.455 [2024-07-15 22:32:00.840737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.455 null0 00:20:47.455 [2024-07-15 22:32:00.872692] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.455 [2024-07-15 22:32:00.872897] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:47.455 [2024-07-15 22:32:00.880672] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.455 22:32:00 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:47.455 [2024-07-15 22:32:00.896645] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:47.455 request: 00:20:47.455 { 00:20:47.455 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.455 "secure_channel": false, 00:20:47.455 "listen_address": { 00:20:47.455 "trtype": "tcp", 00:20:47.455 "traddr": "127.0.0.1", 00:20:47.455 "trsvcid": "4420" 00:20:47.455 }, 00:20:47.455 "method": "nvmf_subsystem_add_listener", 00:20:47.455 "req_id": 1 00:20:47.455 } 00:20:47.455 Got JSON-RPC error response 00:20:47.455 response: 00:20:47.455 { 00:20:47.455 "code": -32602, 00:20:47.455 "message": "Invalid parameters" 00:20:47.455 } 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
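[editor's note] The listener that the negative test above collides with was added through the target's JSON-RPC socket; expressed as a single scripts/rpc.py call, mirroring the arguments shown in the request, it is roughly:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
# Repeating the call is what the NOT wrapper exercises: the target logs
# "Listener already exists" and returns -32602 "Invalid parameters", so the
# command exits non-zero, which is exactly what the test asserts.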
00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:47.455 22:32:00 keyring_file -- keyring/file.sh@46 -- # bperfpid=85034 00:20:47.455 22:32:00 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:47.455 22:32:00 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85034 /var/tmp/bperf.sock 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85034 ']' 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:47.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.455 22:32:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:47.455 [2024-07-15 22:32:00.961619] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:47.455 [2024-07-15 22:32:00.961688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85034 ] 00:20:47.714 [2024-07-15 22:32:01.105101] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.714 [2024-07-15 22:32:01.203386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.714 [2024-07-15 22:32:01.245388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:48.283 22:32:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.283 22:32:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:48.283 22:32:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:48.283 22:32:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:48.541 22:32:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J7ZKgGthup 00:20:48.541 22:32:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J7ZKgGthup 00:20:48.541 22:32:02 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:20:48.541 22:32:02 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:20:48.541 22:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:48.541 22:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:48.541 22:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:48.799 22:32:02 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tvXveuYtG6 == 
\/\t\m\p\/\t\m\p\.\t\v\X\v\e\u\Y\t\G\6 ]] 00:20:48.799 22:32:02 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:20:48.799 22:32:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:48.799 22:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:48.799 22:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:48.799 22:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:49.056 22:32:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.J7ZKgGthup == \/\t\m\p\/\t\m\p\.\J\7\Z\K\g\G\t\h\u\p ]] 00:20:49.056 22:32:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:20:49.056 22:32:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:49.056 22:32:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:49.056 22:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:49.056 22:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:49.056 22:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:49.313 22:32:02 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:20:49.313 22:32:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:20:49.313 22:32:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:49.313 22:32:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:49.313 22:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:49.313 22:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:49.314 22:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:49.570 22:32:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:49.570 22:32:02 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:49.570 22:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:49.571 [2024-07-15 22:32:03.129789] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.571 nvme0n1 00:20:49.827 22:32:03 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:49.827 22:32:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:20:49.827 22:32:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:49.827 22:32:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:50.083 22:32:03 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:20:50.083 22:32:03 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:50.083 Running I/O for 1 seconds... 00:20:51.082 00:20:51.082 Latency(us) 00:20:51.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.082 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:51.082 nvme0n1 : 1.00 16233.73 63.41 0.00 0.00 7863.46 4448.03 18634.33 00:20:51.082 =================================================================================================================== 00:20:51.082 Total : 16233.73 63.41 0.00 0.00 7863.46 4448.03 18634.33 00:20:51.082 0 00:20:51.082 22:32:04 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:51.082 22:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:51.339 22:32:04 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:20:51.339 22:32:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:51.339 22:32:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:51.339 22:32:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:51.339 22:32:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:51.339 22:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:51.597 22:32:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:20:51.597 22:32:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:20:51.597 22:32:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:51.597 22:32:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:51.597 22:32:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:51.597 22:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:51.597 22:32:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:51.855 22:32:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:51.855 22:32:05 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
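[editor's note] For reference, the controller driving the bdevperf run above was attached over the bperf RPC socket with key0; spelled out as one rpc.py call (arguments copied from the trace) it is:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# The step being set up just below swaps in --psk key1, a key the target was
# not configured with, so that attach is expected to fail.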
00:20:51.855 22:32:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:51.855 22:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:52.114 [2024-07-15 22:32:05.558101] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:52.114 [2024-07-15 22:32:05.558693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109710 (107): Transport endpoint is not connected 00:20:52.114 [2024-07-15 22:32:05.559680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109710 (9): Bad file descriptor 00:20:52.114 [2024-07-15 22:32:05.560680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:52.114 [2024-07-15 22:32:05.560703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:52.114 [2024-07-15 22:32:05.560718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:52.114 request: 00:20:52.114 { 00:20:52.114 "name": "nvme0", 00:20:52.114 "trtype": "tcp", 00:20:52.114 "traddr": "127.0.0.1", 00:20:52.114 "adrfam": "ipv4", 00:20:52.114 "trsvcid": "4420", 00:20:52.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.114 "prchk_reftag": false, 00:20:52.114 "prchk_guard": false, 00:20:52.114 "hdgst": false, 00:20:52.114 "ddgst": false, 00:20:52.114 "psk": "key1", 00:20:52.114 "method": "bdev_nvme_attach_controller", 00:20:52.114 "req_id": 1 00:20:52.114 } 00:20:52.114 Got JSON-RPC error response 00:20:52.114 response: 00:20:52.114 { 00:20:52.114 "code": -5, 00:20:52.114 "message": "Input/output error" 00:20:52.114 } 00:20:52.114 22:32:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:52.114 22:32:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.114 22:32:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.114 22:32:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.114 22:32:05 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:20:52.114 22:32:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:52.114 22:32:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:52.114 22:32:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:52.114 22:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:52.114 22:32:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:52.373 22:32:05 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:20:52.373 22:32:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:20:52.373 22:32:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:52.373 22:32:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:52.373 22:32:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:52.373 22:32:05 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:52.373 22:32:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:52.632 22:32:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:52.632 22:32:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:20:52.632 22:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:52.632 22:32:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:20:52.632 22:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:52.891 22:32:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:20:52.891 22:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:52.891 22:32:06 keyring_file -- keyring/file.sh@77 -- # jq length 00:20:53.150 22:32:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:20:53.150 22:32:06 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tvXveuYtG6 00:20:53.150 22:32:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.150 22:32:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.150 22:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.408 [2024-07-15 22:32:06.797465] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tvXveuYtG6': 0100660 00:20:53.408 [2024-07-15 22:32:06.797511] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:53.408 request: 00:20:53.408 { 00:20:53.408 "name": "key0", 00:20:53.408 "path": "/tmp/tmp.tvXveuYtG6", 00:20:53.408 "method": "keyring_file_add_key", 00:20:53.408 "req_id": 1 00:20:53.408 } 00:20:53.408 Got JSON-RPC error response 00:20:53.408 response: 00:20:53.408 { 00:20:53.408 "code": -1, 00:20:53.408 "message": "Operation not permitted" 00:20:53.408 } 00:20:53.408 22:32:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:53.408 22:32:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.408 22:32:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.408 22:32:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.408 22:32:06 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tvXveuYtG6 00:20:53.408 22:32:06 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.408 22:32:06 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvXveuYtG6 00:20:53.408 22:32:07 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tvXveuYtG6 00:20:53.408 22:32:07 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:20:53.408 22:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:53.408 22:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:53.408 22:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:53.408 22:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:53.408 22:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:53.667 22:32:07 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:20:53.667 22:32:07 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.667 22:32:07 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:53.667 22:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:53.941 [2024-07-15 22:32:07.421538] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tvXveuYtG6': No such file or directory 00:20:53.941 [2024-07-15 22:32:07.421585] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:53.941 [2024-07-15 22:32:07.421619] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:53.941 [2024-07-15 22:32:07.421628] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:53.941 [2024-07-15 22:32:07.421637] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:53.941 request: 00:20:53.941 { 00:20:53.941 "name": "nvme0", 00:20:53.941 "trtype": "tcp", 00:20:53.941 "traddr": "127.0.0.1", 00:20:53.941 "adrfam": "ipv4", 00:20:53.941 "trsvcid": "4420", 00:20:53.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:53.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:53.941 "prchk_reftag": false, 00:20:53.941 "prchk_guard": false, 00:20:53.941 "hdgst": false, 00:20:53.941 "ddgst": false, 00:20:53.941 "psk": "key0", 00:20:53.941 "method": "bdev_nvme_attach_controller", 00:20:53.941 "req_id": 1 00:20:53.941 } 00:20:53.941 
Got JSON-RPC error response 00:20:53.941 response: 00:20:53.941 { 00:20:53.941 "code": -19, 00:20:53.941 "message": "No such device" 00:20:53.941 } 00:20:53.941 22:32:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:53.941 22:32:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.941 22:32:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.941 22:32:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.941 22:32:07 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:20:53.941 22:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:54.199 22:32:07 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c5w6xkv7tJ 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:54.199 22:32:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c5w6xkv7tJ 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c5w6xkv7tJ 00:20:54.199 22:32:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.c5w6xkv7tJ 00:20:54.199 22:32:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c5w6xkv7tJ 00:20:54.199 22:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c5w6xkv7tJ 00:20:54.457 22:32:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:54.457 22:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:54.716 nvme0n1 00:20:54.716 22:32:08 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:20:54.716 22:32:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:54.716 22:32:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:54.716 22:32:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:54.716 22:32:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:54.716 22:32:08 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:54.974 22:32:08 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:20:54.974 22:32:08 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:20:54.974 22:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:54.974 22:32:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:20:54.974 22:32:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:20:54.974 22:32:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:54.974 22:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:54.974 22:32:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:55.232 22:32:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:20:55.232 22:32:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:20:55.232 22:32:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:55.232 22:32:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:55.232 22:32:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:55.232 22:32:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:55.232 22:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:55.489 22:32:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:20:55.489 22:32:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:55.489 22:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:55.748 22:32:09 keyring_file -- keyring/file.sh@104 -- # jq length 00:20:55.748 22:32:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:20:55.748 22:32:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:55.748 22:32:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:20:55.748 22:32:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c5w6xkv7tJ 00:20:55.748 22:32:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c5w6xkv7tJ 00:20:56.006 22:32:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J7ZKgGthup 00:20:56.006 22:32:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J7ZKgGthup 00:20:56.265 22:32:09 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:56.265 22:32:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:56.524 nvme0n1 00:20:56.524 22:32:10 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:20:56.524 22:32:10 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:56.783 22:32:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:20:56.783 "subsystems": [ 00:20:56.783 { 00:20:56.783 "subsystem": "keyring", 00:20:56.783 "config": [ 00:20:56.783 { 00:20:56.783 "method": "keyring_file_add_key", 00:20:56.783 "params": { 00:20:56.783 "name": "key0", 00:20:56.783 "path": "/tmp/tmp.c5w6xkv7tJ" 00:20:56.783 } 00:20:56.783 }, 00:20:56.783 { 00:20:56.783 "method": "keyring_file_add_key", 00:20:56.783 "params": { 00:20:56.783 "name": "key1", 00:20:56.783 "path": "/tmp/tmp.J7ZKgGthup" 00:20:56.783 } 00:20:56.783 } 00:20:56.783 ] 00:20:56.783 }, 00:20:56.783 { 00:20:56.783 "subsystem": "iobuf", 00:20:56.783 "config": [ 00:20:56.783 { 00:20:56.783 "method": "iobuf_set_options", 00:20:56.783 "params": { 00:20:56.783 "small_pool_count": 8192, 00:20:56.783 "large_pool_count": 1024, 00:20:56.783 "small_bufsize": 8192, 00:20:56.783 "large_bufsize": 135168 00:20:56.783 } 00:20:56.784 } 00:20:56.784 ] 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "subsystem": "sock", 00:20:56.784 "config": [ 00:20:56.784 { 00:20:56.784 "method": "sock_set_default_impl", 00:20:56.784 "params": { 00:20:56.784 "impl_name": "uring" 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "sock_impl_set_options", 00:20:56.784 "params": { 00:20:56.784 "impl_name": "ssl", 00:20:56.784 "recv_buf_size": 4096, 00:20:56.784 "send_buf_size": 4096, 00:20:56.784 "enable_recv_pipe": true, 00:20:56.784 "enable_quickack": false, 00:20:56.784 "enable_placement_id": 0, 00:20:56.784 "enable_zerocopy_send_server": true, 00:20:56.784 "enable_zerocopy_send_client": false, 00:20:56.784 "zerocopy_threshold": 0, 00:20:56.784 "tls_version": 0, 00:20:56.784 "enable_ktls": false 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "sock_impl_set_options", 00:20:56.784 "params": { 00:20:56.784 "impl_name": "posix", 00:20:56.784 "recv_buf_size": 2097152, 00:20:56.784 "send_buf_size": 2097152, 00:20:56.784 "enable_recv_pipe": true, 00:20:56.784 "enable_quickack": false, 00:20:56.784 "enable_placement_id": 0, 00:20:56.784 "enable_zerocopy_send_server": true, 00:20:56.784 "enable_zerocopy_send_client": false, 00:20:56.784 "zerocopy_threshold": 0, 00:20:56.784 "tls_version": 0, 00:20:56.784 "enable_ktls": false 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "sock_impl_set_options", 00:20:56.784 "params": { 00:20:56.784 "impl_name": "uring", 00:20:56.784 "recv_buf_size": 2097152, 00:20:56.784 "send_buf_size": 2097152, 00:20:56.784 "enable_recv_pipe": true, 00:20:56.784 "enable_quickack": false, 00:20:56.784 "enable_placement_id": 0, 00:20:56.784 "enable_zerocopy_send_server": false, 00:20:56.784 "enable_zerocopy_send_client": false, 00:20:56.784 "zerocopy_threshold": 0, 00:20:56.784 "tls_version": 0, 00:20:56.784 "enable_ktls": false 00:20:56.784 } 00:20:56.784 } 00:20:56.784 ] 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "subsystem": "vmd", 00:20:56.784 "config": [] 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "subsystem": "accel", 00:20:56.784 "config": [ 00:20:56.784 { 00:20:56.784 "method": "accel_set_options", 00:20:56.784 "params": { 00:20:56.784 "small_cache_size": 128, 00:20:56.784 "large_cache_size": 16, 00:20:56.784 "task_count": 2048, 00:20:56.784 "sequence_count": 2048, 00:20:56.784 "buf_count": 2048 00:20:56.784 } 00:20:56.784 } 00:20:56.784 ] 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "subsystem": "bdev", 00:20:56.784 "config": [ 00:20:56.784 { 
00:20:56.784 "method": "bdev_set_options", 00:20:56.784 "params": { 00:20:56.784 "bdev_io_pool_size": 65535, 00:20:56.784 "bdev_io_cache_size": 256, 00:20:56.784 "bdev_auto_examine": true, 00:20:56.784 "iobuf_small_cache_size": 128, 00:20:56.784 "iobuf_large_cache_size": 16 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_raid_set_options", 00:20:56.784 "params": { 00:20:56.784 "process_window_size_kb": 1024 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_iscsi_set_options", 00:20:56.784 "params": { 00:20:56.784 "timeout_sec": 30 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_nvme_set_options", 00:20:56.784 "params": { 00:20:56.784 "action_on_timeout": "none", 00:20:56.784 "timeout_us": 0, 00:20:56.784 "timeout_admin_us": 0, 00:20:56.784 "keep_alive_timeout_ms": 10000, 00:20:56.784 "arbitration_burst": 0, 00:20:56.784 "low_priority_weight": 0, 00:20:56.784 "medium_priority_weight": 0, 00:20:56.784 "high_priority_weight": 0, 00:20:56.784 "nvme_adminq_poll_period_us": 10000, 00:20:56.784 "nvme_ioq_poll_period_us": 0, 00:20:56.784 "io_queue_requests": 512, 00:20:56.784 "delay_cmd_submit": true, 00:20:56.784 "transport_retry_count": 4, 00:20:56.784 "bdev_retry_count": 3, 00:20:56.784 "transport_ack_timeout": 0, 00:20:56.784 "ctrlr_loss_timeout_sec": 0, 00:20:56.784 "reconnect_delay_sec": 0, 00:20:56.784 "fast_io_fail_timeout_sec": 0, 00:20:56.784 "disable_auto_failback": false, 00:20:56.784 "generate_uuids": false, 00:20:56.784 "transport_tos": 0, 00:20:56.784 "nvme_error_stat": false, 00:20:56.784 "rdma_srq_size": 0, 00:20:56.784 "io_path_stat": false, 00:20:56.784 "allow_accel_sequence": false, 00:20:56.784 "rdma_max_cq_size": 0, 00:20:56.784 "rdma_cm_event_timeout_ms": 0, 00:20:56.784 "dhchap_digests": [ 00:20:56.784 "sha256", 00:20:56.784 "sha384", 00:20:56.784 "sha512" 00:20:56.784 ], 00:20:56.784 "dhchap_dhgroups": [ 00:20:56.784 "null", 00:20:56.784 "ffdhe2048", 00:20:56.784 "ffdhe3072", 00:20:56.784 "ffdhe4096", 00:20:56.784 "ffdhe6144", 00:20:56.784 "ffdhe8192" 00:20:56.784 ] 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_nvme_attach_controller", 00:20:56.784 "params": { 00:20:56.784 "name": "nvme0", 00:20:56.784 "trtype": "TCP", 00:20:56.784 "adrfam": "IPv4", 00:20:56.784 "traddr": "127.0.0.1", 00:20:56.784 "trsvcid": "4420", 00:20:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:56.784 "prchk_reftag": false, 00:20:56.784 "prchk_guard": false, 00:20:56.784 "ctrlr_loss_timeout_sec": 0, 00:20:56.784 "reconnect_delay_sec": 0, 00:20:56.784 "fast_io_fail_timeout_sec": 0, 00:20:56.784 "psk": "key0", 00:20:56.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:56.784 "hdgst": false, 00:20:56.784 "ddgst": false 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_nvme_set_hotplug", 00:20:56.784 "params": { 00:20:56.784 "period_us": 100000, 00:20:56.784 "enable": false 00:20:56.784 } 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "method": "bdev_wait_for_examine" 00:20:56.784 } 00:20:56.784 ] 00:20:56.784 }, 00:20:56.784 { 00:20:56.784 "subsystem": "nbd", 00:20:56.784 "config": [] 00:20:56.784 } 00:20:56.784 ] 00:20:56.784 }' 00:20:56.784 22:32:10 keyring_file -- keyring/file.sh@114 -- # killprocess 85034 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85034 ']' 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85034 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@953 -- # uname 
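[editor's note] The killprocess trace that starts here and continues below follows a simple pattern; the sketch below is a paraphrase (not a verbatim copy of autotest_common.sh), with the pid handling and process name taken from the log.
killprocess() {
    local pid=$1 name
    if ! kill -0 "$pid" 2> /dev/null; then           # already gone, as in the pid-84145 case earlier
        echo "Process with pid $pid is not found"
        return 0
    fi
    name=$(ps --no-headers -o comm= "$pid")          # reactor_1 for this bdevperf instance
    echo "killing process with pid $pid"
    if [[ $name == sudo ]]; then
        sudo kill "$pid"                             # assumption: processes launched via sudo are killed through sudo
    else
        kill "$pid"
    fi
    wait "$pid" || true                              # reap it before the script moves on
}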
00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85034 00:20:56.784 killing process with pid 85034 00:20:56.784 Received shutdown signal, test time was about 1.000000 seconds 00:20:56.784 00:20:56.784 Latency(us) 00:20:56.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.784 =================================================================================================================== 00:20:56.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85034' 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@967 -- # kill 85034 00:20:56.784 22:32:10 keyring_file -- common/autotest_common.sh@972 -- # wait 85034 00:20:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:57.043 22:32:10 keyring_file -- keyring/file.sh@117 -- # bperfpid=85261 00:20:57.043 22:32:10 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85261 /var/tmp/bperf.sock 00:20:57.043 22:32:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85261 ']' 00:20:57.043 22:32:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:57.043 22:32:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.043 22:32:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
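[editor's note] The bdevperf restart below illustrates a handy pattern: the JSON configuration is generated on the fly and handed to the process as /dev/fd/63 through bash process substitution instead of a temporary file. A sketch of that pattern follows; build_config_json is a hypothetical stand-in for however file.sh assembles the JSON echoed below.
config=$(build_config_json)                          # hypothetical helper; in the test the JSON comes from the save_config output above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!                                          # 85261 in this run; waitforlisten then polls the bperf socket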
00:20:57.043 22:32:10 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:57.043 22:32:10 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:20:57.043 "subsystems": [ 00:20:57.043 { 00:20:57.043 "subsystem": "keyring", 00:20:57.043 "config": [ 00:20:57.043 { 00:20:57.043 "method": "keyring_file_add_key", 00:20:57.043 "params": { 00:20:57.043 "name": "key0", 00:20:57.043 "path": "/tmp/tmp.c5w6xkv7tJ" 00:20:57.043 } 00:20:57.043 }, 00:20:57.043 { 00:20:57.043 "method": "keyring_file_add_key", 00:20:57.043 "params": { 00:20:57.043 "name": "key1", 00:20:57.043 "path": "/tmp/tmp.J7ZKgGthup" 00:20:57.043 } 00:20:57.043 } 00:20:57.043 ] 00:20:57.043 }, 00:20:57.043 { 00:20:57.043 "subsystem": "iobuf", 00:20:57.043 "config": [ 00:20:57.043 { 00:20:57.043 "method": "iobuf_set_options", 00:20:57.043 "params": { 00:20:57.043 "small_pool_count": 8192, 00:20:57.043 "large_pool_count": 1024, 00:20:57.043 "small_bufsize": 8192, 00:20:57.043 "large_bufsize": 135168 00:20:57.043 } 00:20:57.043 } 00:20:57.043 ] 00:20:57.043 }, 00:20:57.043 { 00:20:57.043 "subsystem": "sock", 00:20:57.043 "config": [ 00:20:57.043 { 00:20:57.043 "method": "sock_set_default_impl", 00:20:57.043 "params": { 00:20:57.043 "impl_name": "uring" 00:20:57.043 } 00:20:57.043 }, 00:20:57.043 { 00:20:57.043 "method": "sock_impl_set_options", 00:20:57.043 "params": { 00:20:57.043 "impl_name": "ssl", 00:20:57.043 "recv_buf_size": 4096, 00:20:57.043 "send_buf_size": 4096, 00:20:57.043 "enable_recv_pipe": true, 00:20:57.043 "enable_quickack": false, 00:20:57.044 "enable_placement_id": 0, 00:20:57.044 "enable_zerocopy_send_server": true, 00:20:57.044 "enable_zerocopy_send_client": false, 00:20:57.044 "zerocopy_threshold": 0, 00:20:57.044 "tls_version": 0, 00:20:57.044 "enable_ktls": false 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "sock_impl_set_options", 00:20:57.044 "params": { 00:20:57.044 "impl_name": "posix", 00:20:57.044 "recv_buf_size": 2097152, 00:20:57.044 "send_buf_size": 2097152, 00:20:57.044 "enable_recv_pipe": true, 00:20:57.044 "enable_quickack": false, 00:20:57.044 "enable_placement_id": 0, 00:20:57.044 "enable_zerocopy_send_server": true, 00:20:57.044 "enable_zerocopy_send_client": false, 00:20:57.044 "zerocopy_threshold": 0, 00:20:57.044 "tls_version": 0, 00:20:57.044 "enable_ktls": false 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "sock_impl_set_options", 00:20:57.044 "params": { 00:20:57.044 "impl_name": "uring", 00:20:57.044 "recv_buf_size": 2097152, 00:20:57.044 "send_buf_size": 2097152, 00:20:57.044 "enable_recv_pipe": true, 00:20:57.044 "enable_quickack": false, 00:20:57.044 "enable_placement_id": 0, 00:20:57.044 "enable_zerocopy_send_server": false, 00:20:57.044 "enable_zerocopy_send_client": false, 00:20:57.044 "zerocopy_threshold": 0, 00:20:57.044 "tls_version": 0, 00:20:57.044 "enable_ktls": false 00:20:57.044 } 00:20:57.044 } 00:20:57.044 ] 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "subsystem": "vmd", 00:20:57.044 "config": [] 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "subsystem": "accel", 00:20:57.044 "config": [ 00:20:57.044 { 00:20:57.044 "method": "accel_set_options", 00:20:57.044 "params": { 00:20:57.044 "small_cache_size": 128, 00:20:57.044 "large_cache_size": 16, 00:20:57.044 "task_count": 2048, 00:20:57.044 "sequence_count": 2048, 00:20:57.044 "buf_count": 2048 00:20:57.044 } 00:20:57.044 } 00:20:57.044 ] 00:20:57.044 }, 
00:20:57.044 { 00:20:57.044 "subsystem": "bdev", 00:20:57.044 "config": [ 00:20:57.044 { 00:20:57.044 "method": "bdev_set_options", 00:20:57.044 "params": { 00:20:57.044 "bdev_io_pool_size": 65535, 00:20:57.044 "bdev_io_cache_size": 256, 00:20:57.044 "bdev_auto_examine": true, 00:20:57.044 "iobuf_small_cache_size": 128, 00:20:57.044 "iobuf_large_cache_size": 16 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_raid_set_options", 00:20:57.044 "params": { 00:20:57.044 "process_window_size_kb": 1024 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_iscsi_set_options", 00:20:57.044 "params": { 00:20:57.044 "timeout_sec": 30 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_nvme_set_options", 00:20:57.044 "params": { 00:20:57.044 "action_on_timeout": "none", 00:20:57.044 "timeout_us": 0, 00:20:57.044 "timeout_admin_us": 0, 00:20:57.044 "keep_alive_timeout_ms": 10000, 00:20:57.044 "arbitration_burst": 0, 00:20:57.044 "low_priority_weight": 0, 00:20:57.044 "medium_priority_weight": 0, 00:20:57.044 "high_priority_weight": 0, 00:20:57.044 "nvme_adminq_poll_period_us": 10000, 00:20:57.044 "nvme_ioq_poll_period_us": 0, 00:20:57.044 "io_queue_requests": 512, 00:20:57.044 "delay_cmd_submit": true, 00:20:57.044 "transport_retry_count": 4, 00:20:57.044 "bdev_retry_count": 3, 00:20:57.044 "transport_ack_timeout": 0, 00:20:57.044 "ctrlr_loss_timeout_sec": 0, 00:20:57.044 "reconnect_delay_sec": 0, 00:20:57.044 "fast_io_fail_timeout_sec": 0, 00:20:57.044 "disable_auto_failback": false, 00:20:57.044 "generate_uuids": false, 00:20:57.044 "transport_tos": 0, 00:20:57.044 "nvme_error_stat": false, 00:20:57.044 "rdma_srq_size": 0, 00:20:57.044 "io_path_stat": false, 00:20:57.044 "allow_accel_sequence": false, 00:20:57.044 "rdma_max_cq_size": 0, 00:20:57.044 "rdma_cm_event_timeout_ms": 0, 00:20:57.044 "dhchap_digests": [ 00:20:57.044 "sha256", 00:20:57.044 "sha384", 00:20:57.044 "sha512" 00:20:57.044 ], 00:20:57.044 "dhchap_dhgroups": [ 00:20:57.044 "null", 00:20:57.044 "ffdhe2048", 00:20:57.044 "ffdhe3072", 00:20:57.044 "ffdhe4096", 00:20:57.044 "ffdhe6144", 00:20:57.044 "ffdhe8192" 00:20:57.044 ] 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_nvme_attach_controller", 00:20:57.044 "params": { 00:20:57.044 "name": "nvme0", 00:20:57.044 "trtype": "TCP", 00:20:57.044 "adrfam": "IPv4", 00:20:57.044 "traddr": "127.0.0.1", 00:20:57.044 "trsvcid": "4420", 00:20:57.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.044 "prchk_reftag": false, 00:20:57.044 "prchk_guard": false, 00:20:57.044 "ctrlr_loss_timeout_sec": 0, 00:20:57.044 "reconnect_delay_sec": 0, 00:20:57.044 "fast_io_fail_timeout_sec": 0, 00:20:57.044 "psk": "key0", 00:20:57.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:57.044 "hdgst": false, 00:20:57.044 "ddgst": false 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_nvme_set_hotplug", 00:20:57.044 "params": { 00:20:57.044 "period_us": 100000, 00:20:57.044 "enable": false 00:20:57.044 } 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "method": "bdev_wait_for_examine" 00:20:57.044 } 00:20:57.044 ] 00:20:57.044 }, 00:20:57.044 { 00:20:57.044 "subsystem": "nbd", 00:20:57.044 "config": [] 00:20:57.044 } 00:20:57.044 ] 00:20:57.044 }' 00:20:57.044 22:32:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.044 22:32:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.044 [2024-07-15 22:32:10.608151] Starting SPDK v24.09-pre git sha1 fcbf7f00f / 
DPDK 24.03.0 initialization... 00:20:57.044 [2024-07-15 22:32:10.608509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85261 ] 00:20:57.309 [2024-07-15 22:32:10.754365] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.309 [2024-07-15 22:32:10.849621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.567 [2024-07-15 22:32:10.971588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:57.567 [2024-07-15 22:32:11.019980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.134 22:32:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.134 22:32:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:58.134 22:32:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:20:58.134 22:32:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.134 22:32:11 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:20:58.134 22:32:11 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.134 22:32:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.392 22:32:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:58.392 22:32:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:20:58.392 22:32:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:58.392 22:32:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:58.392 22:32:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.392 22:32:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.392 22:32:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:58.651 22:32:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:20:58.651 22:32:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:20:58.651 22:32:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:20:58.651 22:32:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:58.910 22:32:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:20:58.910 22:32:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:58.911 22:32:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.c5w6xkv7tJ /tmp/tmp.J7ZKgGthup 00:20:58.911 22:32:12 keyring_file -- keyring/file.sh@20 -- # killprocess 85261 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85261 ']' 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85261 00:20:58.911 22:32:12 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85261 00:20:58.911 killing process with pid 85261 00:20:58.911 Received shutdown signal, test time was about 1.000000 seconds 00:20:58.911 00:20:58.911 Latency(us) 00:20:58.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.911 =================================================================================================================== 00:20:58.911 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85261' 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@967 -- # kill 85261 00:20:58.911 22:32:12 keyring_file -- common/autotest_common.sh@972 -- # wait 85261 00:20:59.169 22:32:12 keyring_file -- keyring/file.sh@21 -- # killprocess 85017 00:20:59.169 22:32:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85017 ']' 00:20:59.169 22:32:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85017 00:20:59.169 22:32:12 keyring_file -- common/autotest_common.sh@953 -- # uname 00:20:59.169 22:32:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85017 00:20:59.170 killing process with pid 85017 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85017' 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@967 -- # kill 85017 00:20:59.170 [2024-07-15 22:32:12.616565] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.170 22:32:12 keyring_file -- common/autotest_common.sh@972 -- # wait 85017 00:20:59.428 00:20:59.428 real 0m13.278s 00:20:59.428 user 0m31.743s 00:20:59.428 sys 0m3.207s 00:20:59.428 ************************************ 00:20:59.428 END TEST keyring_file 00:20:59.428 ************************************ 00:20:59.428 22:32:12 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.428 22:32:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:59.428 22:32:12 -- common/autotest_common.sh@1142 -- # return 0 00:20:59.428 22:32:12 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:20:59.428 22:32:12 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:59.428 22:32:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:59.428 22:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.428 22:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:59.428 ************************************ 00:20:59.428 START TEST keyring_linux 00:20:59.428 ************************************ 00:20:59.428 22:32:13 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:59.687 * 
Looking for test storage... 00:20:59.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:59.687 22:32:13 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:59.687 22:32:13 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:37374fe9-a847-4b40-94af-b766955abedc 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=37374fe9-a847-4b40-94af-b766955abedc 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.687 22:32:13 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:59.687 22:32:13 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.687 22:32:13 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.687 22:32:13 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.687 22:32:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.687 22:32:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.688 22:32:13 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.688 22:32:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:59.688 22:32:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:59.688 22:32:13 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:59.688 /tmp/:spdk-test:key0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:59.688 22:32:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:59.688 /tmp/:spdk-test:key1 00:20:59.688 22:32:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85374 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85374 00:20:59.688 22:32:13 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85374 ']' 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.688 22:32:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:59.947 [2024-07-15 22:32:13.345200] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
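For orientation: the prep_key calls traced above turn the raw hex key 00112233445566778899aabbccddeeff into an NVMe TLS PSK in interchange format and write it to a mode-0600 temp file; key1 goes through the same steps with the second hex string. A minimal sketch of that flow, with the encoded value copied verbatim from the trace rather than recomputed (the inline python doing the encoding is assumed to emit base64 of the key bytes plus a CRC-32 behind the NVMeTLSkey-1 prefix; treat that detail as an assumption):

# key0 as prepared by keyring/common.sh in the trace; key1 is analogous.
key_hex=00112233445566778899aabbccddeeff   # raw key material, digest 0 (shown for reference)
path=/tmp/:spdk-test:key0

# Interchange-format PSK, copied from the keyctl call later in the log.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"   # same permissions the test sets before using the file
echo "$path"         # prep_key echoes the path back to its caller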
00:20:59.947 [2024-07-15 22:32:13.345286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85374 ] 00:20:59.947 [2024-07-15 22:32:13.493416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.206 [2024-07-15 22:32:13.589407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.206 [2024-07-15 22:32:13.630697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:00.774 [2024-07-15 22:32:14.173752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.774 null0 00:21:00.774 [2024-07-15 22:32:14.205694] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.774 [2024-07-15 22:32:14.205896] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:00.774 695940693 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:00.774 934219703 00:21:00.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85392 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:00.774 22:32:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85392 /var/tmp/bperf.sock 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85392 ']' 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.774 22:32:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:00.774 [2024-07-15 22:32:14.291984] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
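The bdevperf instance coming up here was started with --wait-for-rpc; kernel-keyring lookups are switched on before its framework initializes, and the controller is then attached over the loopback TLS listener using the key's keyring name instead of a file path. A condensed, illustrative replay of those steps (the serials in the comments are simply the ones this run received; reading the payloads back from the temp files is a shortcut, not necessarily what linux.sh does verbatim):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Load both interchange-format PSKs into the session keyring; keyctl
# prints the serial of each new key on stdout.
sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
echo "key serials: $sn0 $sn1"             # 695940693 / 934219703 in this run

# Keyring lookups have to be enabled before the framework starts, which is
# why bdevperf was launched with --wait-for-rpc.
"$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$rpc" -s /var/tmp/bperf.sock framework_start_init

# TLS attach to the loopback listener created by spdk_tgt above, with the
# PSK referenced by keyring name rather than by file path.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0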
00:21:00.774 [2024-07-15 22:32:14.292227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85392 ] 00:21:01.034 [2024-07-15 22:32:14.431970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.034 [2024-07-15 22:32:14.526549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.602 22:32:15 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.602 22:32:15 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:01.602 22:32:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:01.602 22:32:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:01.861 22:32:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:01.861 22:32:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:02.121 [2024-07-15 22:32:15.522197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:02.121 22:32:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:02.121 22:32:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:02.121 [2024-07-15 22:32:15.752984] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.380 nvme0n1 00:21:02.380 22:32:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:02.380 22:32:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:02.380 22:32:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:02.380 22:32:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:02.380 22:32:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:02.380 22:32:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:02.639 22:32:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.639 22:32:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.639 22:32:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@25 -- # sn=695940693 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:02.639 
22:32:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 695940693 == \6\9\5\9\4\0\6\9\3 ]] 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 695940693 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:02.639 22:32:16 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:02.898 Running I/O for 1 seconds... 00:21:03.835 00:21:03.835 Latency(us) 00:21:03.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.835 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:03.835 nvme0n1 : 1.01 18442.57 72.04 0.00 0.00 6912.33 5711.37 12791.36 00:21:03.835 =================================================================================================================== 00:21:03.835 Total : 18442.57 72.04 0.00 0.00 6912.33 5711.37 12791.36 00:21:03.835 0 00:21:03.835 22:32:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:03.835 22:32:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:04.095 22:32:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:04.095 22:32:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:04.095 22:32:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:04.095 22:32:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:04.095 22:32:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:04.095 22:32:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.354 22:32:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:04.354 22:32:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:04.354 22:32:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:04.354 22:32:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.354 22:32:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:04.355 22:32:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:04.614 [2024-07-15 22:32:17.991291] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:04.614 [2024-07-15 22:32:17.991636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9275f0 (107): Transport endpoint is not connected 00:21:04.614 [2024-07-15 22:32:17.992625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9275f0 (9): Bad file descriptor 00:21:04.614 [2024-07-15 22:32:17.993622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:04.614 [2024-07-15 22:32:17.993642] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:04.614 [2024-07-15 22:32:17.993651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:04.614 request: 00:21:04.614 { 00:21:04.614 "name": "nvme0", 00:21:04.614 "trtype": "tcp", 00:21:04.614 "traddr": "127.0.0.1", 00:21:04.614 "adrfam": "ipv4", 00:21:04.614 "trsvcid": "4420", 00:21:04.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:04.614 "prchk_reftag": false, 00:21:04.614 "prchk_guard": false, 00:21:04.614 "hdgst": false, 00:21:04.614 "ddgst": false, 00:21:04.614 "psk": ":spdk-test:key1", 00:21:04.614 "method": "bdev_nvme_attach_controller", 00:21:04.614 "req_id": 1 00:21:04.614 } 00:21:04.614 Got JSON-RPC error response 00:21:04.614 response: 00:21:04.614 { 00:21:04.614 "code": -5, 00:21:04.614 "message": "Input/output error" 00:21:04.614 } 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@33 -- # sn=695940693 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 695940693 00:21:04.614 1 links removed 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@33 -- # sn=934219703 00:21:04.614 22:32:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 934219703 00:21:04.614 1 links removed 00:21:04.614 22:32:18 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85392 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85392 ']' 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85392 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85392 00:21:04.614 killing process with pid 85392 00:21:04.614 Received shutdown signal, test time was about 1.000000 seconds 00:21:04.614 00:21:04.614 Latency(us) 00:21:04.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.614 =================================================================================================================== 00:21:04.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85392' 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 85392 00:21:04.614 22:32:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 85392 00:21:04.875 22:32:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85374 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85374 ']' 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85374 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85374 00:21:04.875 killing process with pid 85374 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85374' 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 85374 00:21:04.875 22:32:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 85374 00:21:05.151 00:21:05.151 real 0m5.598s 00:21:05.151 user 0m10.167s 00:21:05.151 sys 0m1.631s 00:21:05.151 ************************************ 00:21:05.151 END TEST keyring_linux 00:21:05.151 ************************************ 00:21:05.151 22:32:18 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.151 22:32:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:05.151 22:32:18 -- common/autotest_common.sh@1142 -- # return 0 00:21:05.151 22:32:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
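The cleanup traced above reduces to: resolve each test key's serial number, unlink it from the session keyring, then tear down the two helper processes. A compact sketch of that sequence (pids are the ones from this run; killprocess performs extra sanity checks that are elided here):

# Unlink both test keys from the session keyring by serial number.
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name")   # 695940693 / 934219703 in this run
    keyctl unlink "$sn"                   # prints "1 links removed"
done

# killprocess() in the trace amounts to a guarded SIGTERM per helper pid.
for pid in 85392 85374; do                # bdevperf first, then spdk_tgt
    kill -0 "$pid" 2>/dev/null && kill "$pid"
done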
00:21:05.151 22:32:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:05.151 22:32:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:05.151 22:32:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:05.151 22:32:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:05.151 22:32:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:05.151 22:32:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:05.151 22:32:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:05.151 22:32:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.151 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:21:05.151 22:32:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:05.151 22:32:18 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:05.151 22:32:18 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:05.151 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:21:07.722 INFO: APP EXITING 00:21:07.722 INFO: killing all VMs 00:21:07.722 INFO: killing vhost app 00:21:07.722 INFO: EXIT DONE 00:21:08.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:08.291 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:08.291 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:09.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.223 Cleaning 00:21:09.223 Removing: /var/run/dpdk/spdk0/config 00:21:09.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:09.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:09.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:09.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:09.223 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:09.223 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:09.223 Removing: /var/run/dpdk/spdk1/config 00:21:09.223 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:09.223 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:09.223 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:09.223 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:09.223 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:09.223 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:09.224 Removing: /var/run/dpdk/spdk2/config 00:21:09.224 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:09.224 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:09.224 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:09.224 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:09.224 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:09.224 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:09.224 Removing: /var/run/dpdk/spdk3/config 00:21:09.224 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:09.224 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:09.224 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:09.224 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:09.224 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:09.224 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:09.224 Removing: /var/run/dpdk/spdk4/config 00:21:09.224 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:09.224 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:09.224 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:09.482 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:09.482 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:09.482 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:09.482 Removing: /dev/shm/nvmf_trace.0 00:21:09.482 Removing: /dev/shm/spdk_tgt_trace.pid58792 00:21:09.482 Removing: /var/run/dpdk/spdk0 00:21:09.482 Removing: /var/run/dpdk/spdk1 00:21:09.482 Removing: /var/run/dpdk/spdk2 00:21:09.482 Removing: /var/run/dpdk/spdk3 00:21:09.482 Removing: /var/run/dpdk/spdk4 00:21:09.482 Removing: /var/run/dpdk/spdk_pid58646 00:21:09.482 Removing: /var/run/dpdk/spdk_pid58792 00:21:09.482 Removing: /var/run/dpdk/spdk_pid58989 00:21:09.482 Removing: /var/run/dpdk/spdk_pid59076 00:21:09.482 Removing: /var/run/dpdk/spdk_pid59103 00:21:09.482 Removing: /var/run/dpdk/spdk_pid59207 00:21:09.482 Removing: /var/run/dpdk/spdk_pid59225 00:21:09.482 Removing: /var/run/dpdk/spdk_pid59349 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59528 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59669 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59740 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59816 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59908 00:21:09.483 Removing: /var/run/dpdk/spdk_pid59984 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60017 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60058 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60114 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60219 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60635 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60687 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60733 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60749 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60816 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60832 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60899 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60909 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60955 00:21:09.483 Removing: /var/run/dpdk/spdk_pid60973 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61013 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61031 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61159 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61199 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61269 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61326 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61351 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61415 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61449 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61489 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61525 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61559 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61594 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61637 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61671 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61706 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61740 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61775 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61809 00:21:09.483 Removing: /var/run/dpdk/spdk_pid61845 00:21:09.741 Removing: /var/run/dpdk/spdk_pid61885 00:21:09.741 Removing: /var/run/dpdk/spdk_pid61916 00:21:09.741 Removing: /var/run/dpdk/spdk_pid61956 00:21:09.741 Removing: /var/run/dpdk/spdk_pid61996 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62034 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62071 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62111 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62147 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62217 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62310 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62618 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62635 00:21:09.741 
Removing: /var/run/dpdk/spdk_pid62672 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62685 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62701 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62725 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62739 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62761 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62781 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62799 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62820 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62839 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62858 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62879 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62898 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62917 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62938 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62957 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62976 00:21:09.741 Removing: /var/run/dpdk/spdk_pid62992 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63028 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63047 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63076 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63146 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63174 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63189 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63218 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63233 00:21:09.741 Removing: /var/run/dpdk/spdk_pid63246 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63288 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63302 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63336 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63351 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63360 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63370 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63385 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63400 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63404 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63419 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63453 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63485 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63500 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63523 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63538 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63551 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63597 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63609 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63635 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63649 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63662 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63670 00:21:09.742 Removing: /var/run/dpdk/spdk_pid63678 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63690 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63698 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63711 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63785 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63827 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63937 00:21:10.000 Removing: /var/run/dpdk/spdk_pid63978 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64021 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64041 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64063 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64083 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64121 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64142 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64212 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64234 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64284 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64371 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64427 00:21:10.000 Removing: /var/run/dpdk/spdk_pid64455 00:21:10.001 Removing: 
/var/run/dpdk/spdk_pid64553 00:21:10.001 Removing: /var/run/dpdk/spdk_pid64600 00:21:10.001 Removing: /var/run/dpdk/spdk_pid64633 00:21:10.001 Removing: /var/run/dpdk/spdk_pid64857 00:21:10.001 Removing: /var/run/dpdk/spdk_pid64955 00:21:10.001 Removing: /var/run/dpdk/spdk_pid64989 00:21:10.001 Removing: /var/run/dpdk/spdk_pid65310 00:21:10.001 Removing: /var/run/dpdk/spdk_pid65346 00:21:10.001 Removing: /var/run/dpdk/spdk_pid65638 00:21:10.001 Removing: /var/run/dpdk/spdk_pid66041 00:21:10.001 Removing: /var/run/dpdk/spdk_pid66299 00:21:10.001 Removing: /var/run/dpdk/spdk_pid67073 00:21:10.001 Removing: /var/run/dpdk/spdk_pid67889 00:21:10.001 Removing: /var/run/dpdk/spdk_pid68005 00:21:10.001 Removing: /var/run/dpdk/spdk_pid68073 00:21:10.001 Removing: /var/run/dpdk/spdk_pid69328 00:21:10.001 Removing: /var/run/dpdk/spdk_pid69530 00:21:10.001 Removing: /var/run/dpdk/spdk_pid72610 00:21:10.001 Removing: /var/run/dpdk/spdk_pid72909 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73017 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73145 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73171 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73200 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73224 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73311 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73444 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73584 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73659 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73841 00:21:10.001 Removing: /var/run/dpdk/spdk_pid73919 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74006 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74314 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74699 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74701 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74976 00:21:10.001 Removing: /var/run/dpdk/spdk_pid74990 00:21:10.001 Removing: /var/run/dpdk/spdk_pid75004 00:21:10.001 Removing: /var/run/dpdk/spdk_pid75039 00:21:10.001 Removing: /var/run/dpdk/spdk_pid75045 00:21:10.001 Removing: /var/run/dpdk/spdk_pid75339 00:21:10.260 Removing: /var/run/dpdk/spdk_pid75388 00:21:10.260 Removing: /var/run/dpdk/spdk_pid75662 00:21:10.260 Removing: /var/run/dpdk/spdk_pid75859 00:21:10.260 Removing: /var/run/dpdk/spdk_pid76233 00:21:10.260 Removing: /var/run/dpdk/spdk_pid76725 00:21:10.260 Removing: /var/run/dpdk/spdk_pid77498 00:21:10.260 Removing: /var/run/dpdk/spdk_pid78088 00:21:10.260 Removing: /var/run/dpdk/spdk_pid78092 00:21:10.260 Removing: /var/run/dpdk/spdk_pid79972 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80032 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80087 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80147 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80262 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80317 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80377 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80426 00:21:10.260 Removing: /var/run/dpdk/spdk_pid80743 00:21:10.260 Removing: /var/run/dpdk/spdk_pid81896 00:21:10.260 Removing: /var/run/dpdk/spdk_pid82030 00:21:10.260 Removing: /var/run/dpdk/spdk_pid82272 00:21:10.260 Removing: /var/run/dpdk/spdk_pid82833 00:21:10.260 Removing: /var/run/dpdk/spdk_pid82996 00:21:10.260 Removing: /var/run/dpdk/spdk_pid83154 00:21:10.260 Removing: /var/run/dpdk/spdk_pid83251 00:21:10.260 Removing: /var/run/dpdk/spdk_pid83416 00:21:10.260 Removing: /var/run/dpdk/spdk_pid83531 00:21:10.260 Removing: /var/run/dpdk/spdk_pid84192 00:21:10.260 Removing: /var/run/dpdk/spdk_pid84227 00:21:10.260 Removing: /var/run/dpdk/spdk_pid84262 00:21:10.260 Removing: /var/run/dpdk/spdk_pid84521 
00:21:10.260 Removing: /var/run/dpdk/spdk_pid84551 00:21:10.260 Removing: /var/run/dpdk/spdk_pid84586 00:21:10.260 Removing: /var/run/dpdk/spdk_pid85017 00:21:10.260 Removing: /var/run/dpdk/spdk_pid85034 00:21:10.260 Removing: /var/run/dpdk/spdk_pid85261 00:21:10.260 Removing: /var/run/dpdk/spdk_pid85374 00:21:10.260 Removing: /var/run/dpdk/spdk_pid85392 00:21:10.260 Clean 00:21:10.260 22:32:23 -- common/autotest_common.sh@1451 -- # return 0 00:21:10.260 22:32:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:10.260 22:32:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.260 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:21:10.519 22:32:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:10.519 22:32:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.519 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:21:10.519 22:32:23 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:10.519 22:32:23 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:10.519 22:32:23 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:10.519 22:32:23 -- spdk/autotest.sh@391 -- # hash lcov 00:21:10.519 22:32:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:10.519 22:32:23 -- spdk/autotest.sh@393 -- # hostname 00:21:10.519 22:32:24 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:10.778 geninfo: WARNING: invalid characters removed from testname! 
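The remaining output is coverage post-processing: the pre-test baseline is merged with the capture generated above, and everything outside SPDK's own sources is filtered back out of the tracefile. A condensed form of the lcov invocations that follow (only the branch/function rc flags are kept; the genhtml/geninfo ones are dropped for brevity):

out=/home/vagrant/spdk_repo/spdk/../output
rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # deliberately unquoted below

# Merge the pre-test baseline with the post-test capture.
lcov $rc --no-external -q \
     -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Strip DPDK, system headers and helper apps so only SPDK code remains.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc --no-external -q \
         -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done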
00:21:37.396 22:32:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:39.299 22:32:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:41.196 22:32:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:43.751 22:32:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:45.649 22:32:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:48.178 22:33:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:50.074 22:33:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:50.074 22:33:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:50.074 22:33:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:50.074 22:33:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:50.074 22:33:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:50.074 22:33:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:50.074 22:33:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
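Laid out as a plain script, the coverage post-processing traced above amounts to one merge followed by a series of path filters (the out variable and the trimmed-down LCOV_OPTS are shorthands of mine; the file names and exclusion patterns are taken from the log):

    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Merge the pre-test baseline with the counters captured after the run.
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip sources that should not count against SPDK coverage: bundled DPDK,
    # system headers, and a few example/tool apps.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    # Only cov_total.info is kept; the intermediate captures are removed.
    rm -f cov_base.info cov_test.info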
00:21:50.074 22:33:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:50.074 22:33:03 -- paths/export.sh@5 -- $ export PATH
00:21:50.074 22:33:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:50.074 22:33:03 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:50.074 22:33:03 -- common/autobuild_common.sh@444 -- $ date +%s
00:21:50.074 22:33:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082783.XXXXXX
00:21:50.074 22:33:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082783.Yg9yFd
00:21:50.074 22:33:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:21:50.074 22:33:03 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:21:50.074 22:33:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:21:50.074 22:33:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:50.074 22:33:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:50.074 22:33:03 -- common/autobuild_common.sh@460 -- $ get_config_params
00:21:50.074 22:33:03 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:21:50.074 22:33:03 -- common/autotest_common.sh@10 -- $ set +x
00:21:50.074 22:33:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:21:50.074 22:33:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:21:50.074 22:33:03 -- pm/common@17 -- $ local monitor
00:21:50.075 22:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:50.075 22:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:50.075 22:33:03 -- pm/common@25 -- $ sleep 1
00:21:50.075 22:33:03 -- pm/common@21 -- $ date +%s
00:21:50.075 22:33:03 -- pm/common@21 -- $ date +%s
00:21:50.075 22:33:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721082783
00:21:50.075 22:33:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721082783
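The autopackage prologue above does three things: it creates a throwaway workspace, prepares scan-build exclusions, and starts two lightweight resource monitors that log under the output/power directory. A sketch with values copied from this run (how the collect-* helpers daemonize and write their .pid files is not visible here, so the backgrounding below is an assumption):

    out=/home/vagrant/spdk_repo/spdk/../output
    ts=$(date +%s)   # 1721082783 in this run

    # Disposable work area, e.g. /tmp/spdk_1721082783.Yg9yFd above.
    SPDK_WORKSPACE=$(mktemp -dt "spdk_$ts.XXXXXX")

    # scan-build invocation with the bundled DPDK, xnvme and /tmp excluded.
    scanbuild="scan-build -o $out/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs"

    # Start the CPU-load and vmstat collectors; each writes its log as
    # $out/power/monitor.autopackage.sh.<ts>_<name>.pm.log and leaves a <name>.pid file.
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autopackage.sh.$ts" &
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d "$out/power" -l -p "monitor.autopackage.sh.$ts" &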
00:21:50.075 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721082783_collect-vmstat.pm.log
00:21:50.075 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721082783_collect-cpu-load.pm.log
00:21:51.008 22:33:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:21:51.008 22:33:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:51.008 22:33:04 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:21:51.008 22:33:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:51.008 22:33:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:51.008 22:33:04 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:51.008 22:33:04 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:51.008 22:33:04 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:51.008 22:33:04 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:51.008 22:33:04 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:51.008 22:33:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:21:51.008 22:33:04 -- pm/common@29 -- $ signal_monitor_resources TERM
00:21:51.008 22:33:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:21:51.008 22:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:51.008 22:33:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:21:51.008 22:33:04 -- pm/common@44 -- $ pid=87188
00:21:51.008 22:33:04 -- pm/common@50 -- $ kill -TERM 87188
00:21:51.008 22:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:51.008 22:33:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:21:51.008 22:33:04 -- pm/common@44 -- $ pid=87190
00:21:51.008 22:33:04 -- pm/common@50 -- $ kill -TERM 87190
00:21:51.008 + [[ -n 5107 ]]
00:21:51.008 + sudo kill 5107
00:21:51.017 [Pipeline] }
00:21:51.034 [Pipeline] // timeout
00:21:51.040 [Pipeline] }
00:21:51.057 [Pipeline] // stage
00:21:51.063 [Pipeline] }
00:21:51.080 [Pipeline] // catchError
00:21:51.089 [Pipeline] stage
00:21:51.091 [Pipeline] { (Stop VM)
00:21:51.105 [Pipeline] sh
00:21:51.383 + vagrant halt
00:21:54.712 ==> default: Halting domain...
00:22:01.286 [Pipeline] sh
00:22:01.618 + vagrant destroy -f
00:22:04.904 ==> default: Removing domain...
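On exit the same pm/common helpers undo that setup, and timing_finish renders a flame graph of the recorded step timings when FlameGraph is installed. Roughly, as inferred from the trace above (the SVG destination is not shown in the log, so it is an assumption here):

    out=/home/vagrant/spdk_repo/spdk/../output

    # timing_finish: turn timing.txt into a "Build Timing" flame graph if possible.
    flamegraph=/usr/local/FlameGraph/flamegraph.pl
    if [ -x "$flamegraph" ]; then
        "$flamegraph" --title 'Build Timing' --nametype Step: --countname seconds \
            "$out/timing.txt" > "$out/timing.svg"   # output file name assumed
    fi

    # stop_monitor_resources: SIGTERM every collector that left a PID file behind
    # (PIDs 87188 and 87190 in this run).
    for name in collect-cpu-load collect-vmstat; do
        pidfile=$out/power/$name.pid
        if [[ -e $pidfile ]]; then
            kill -TERM "$(cat "$pidfile")"
        fi
    done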
00:22:04.917 [Pipeline] sh
00:22:05.198 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output
00:22:05.209 [Pipeline] }
00:22:05.228 [Pipeline] // stage
00:22:05.235 [Pipeline] }
00:22:05.253 [Pipeline] // dir
00:22:05.259 [Pipeline] }
00:22:05.279 [Pipeline] // wrap
00:22:05.286 [Pipeline] }
00:22:05.304 [Pipeline] // catchError
00:22:05.314 [Pipeline] stage
00:22:05.317 [Pipeline] { (Epilogue)
00:22:05.332 [Pipeline] sh
00:22:05.633 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:10.911 [Pipeline] catchError
00:22:10.913 [Pipeline] {
00:22:10.930 [Pipeline] sh
00:22:11.212 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:11.472 Artifacts sizes are good
00:22:11.480 [Pipeline] }
00:22:11.498 [Pipeline] // catchError
00:22:11.511 [Pipeline] archiveArtifacts
00:22:11.517 Archiving artifacts
00:22:11.674 [Pipeline] cleanWs
00:22:11.685 [WS-CLEANUP] Deleting project workspace...
00:22:11.685 [WS-CLEANUP] Deferred wipeout is used...
00:22:11.692 [WS-CLEANUP] done
00:22:11.694 [Pipeline] }
00:22:11.715 [Pipeline] // stage
00:22:11.721 [Pipeline] }
00:22:11.742 [Pipeline] // node
00:22:11.748 [Pipeline] End of Pipeline
00:22:11.789 Finished: SUCCESS